Android camera library CameraView source code analysis (5): saving filter effects

1. Introduction

For a while now I have been using natario1/CameraView to implement filtered preview, photo capture, and video recording. Its encapsulation is quite well done, and it saved us a lot of time in the early stages of the project. However, as the project moved into deeper waters, the library gradually became unable to meet our needs, and some long-standing bugs reported in the GitHub issues have never been fixed by the authors.

What should we do? The project urgently needs to implement related functions, so we can only bite the bullet and read its source code to solve these problems.
In the previous article, we gained a general understanding of the entire process of taking photos with filters. In this article, we focus on how the filter effect is saved.

The following source code analysis is based on CameraView 2.7.2:

implementation("com.otaliastudios:cameraview:2.7.2")

In order to better display on the blog, the code posted in this article has been partially streamlined.


Drawing and saving is part 5 of the takeFrame() method in the SnapshotGlPictureRecorder class.

// 5. Draw and save
long timestampUs = surfaceTexture.getTimestamp() / 1000L;
LOG.i("takeFrame:", "timestampUs:", timestampUs);
mTextureDrawer.draw(timestampUs);
if (mHasOverlay) mOverlayDrawer.render(timestampUs);
mResult.data = eglSurface.toByteArray(Bitmap.CompressFormat.JPEG);

This part of the code does two things:

  • mTextureDrawer.draw(): draw the filter
  • eglSurface.toByteArray(): convert the result into a JPEG byte array

2. Draw filters

Let's first look at mTextureDrawer.draw(). mTextureDrawer was already introduced in the previous article; through it, mFilter.draw() is eventually called.

public void draw(final long timestampUs) {
    if (mPendingFilter != null) {
        release();
        mFilter = mPendingFilter;
        mPendingFilter = null;
    }

    if (mProgramHandle == -1) {
        mProgramHandle = GlProgram.create(
                mFilter.getVertexShader(),
                mFilter.getFragmentShader());
        mFilter.onCreate(mProgramHandle);
        Egloo.checkGlError("program creation");
    }

    GLES20.glUseProgram(mProgramHandle);
    Egloo.checkGlError("glUseProgram(handle)");
    mTexture.bind();
    mFilter.draw(timestampUs, mTextureTransform);
    mTexture.unbind();
    GLES20.glUseProgram(0);
    Egloo.checkGlError("glUseProgram(0)");
}

You can see that in mTextureDrawer.draw(), the calling sequence is as follows:

  1. Call GlProgram.create() to create an OpenGL Program
  2. Call the Filter's onCreate()
  3. Call GLES20.glUseProgram() to enable this Program
  4. Call mTexture.bind(); mTexture is a GlTexture, the main texture here
  5. Call the Filter's draw() method, which in turn calls onDraw()
  6. Finally, call mTexture.unbind() to unbind
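The create-once-then-use pattern in the sequence above (the program handle is compiled lazily on the first draw, then reused) can be sketched without any real GL calls. FakeFilter, FakeTextureDrawer, and the handle value 42 below are made-up stand-ins for illustration, not the library's classes:

```java
public class LazyDrawSketch {
    // Hypothetical stand-in mimicking the Filter callbacks.
    static class FakeFilter {
        int createCalls = 0;
        int drawCalls = 0;
        void onCreate(int programHandle) { createCalls++; }
        void draw(long timestampUs) { drawCalls++; }
    }

    // Hypothetical stand-in mimicking GlTextureDrawer's lazy program creation.
    static class FakeTextureDrawer {
        private final FakeFilter filter;
        private int programHandle = -1; // -1 means "not created yet", as in the source

        FakeTextureDrawer(FakeFilter filter) { this.filter = filter; }

        void draw(long timestampUs) {
            if (programHandle == -1) {
                programHandle = 42; // stands in for GlProgram.create(...)
                filter.onCreate(programHandle);
            }
            // glUseProgram / bind / unbind would surround this in the real drawer
            filter.draw(timestampUs);
        }
    }

    public static void main(String[] args) {
        FakeFilter filter = new FakeFilter();
        FakeTextureDrawer drawer = new FakeTextureDrawer(filter);
        for (int i = 0; i < 3; i++) drawer.draw(i);
        System.out.println(filter.createCalls + " " + filter.drawCalls); // → 1 3
    }
}
```

No matter how many frames are drawn, shader compilation happens only once, which matters because program creation is far more expensive than a draw call.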

Here we focus on mFilter.draw(), which ends up in the onDraw() of the Filter interface mentioned earlier. The actual drawing when taking a picture is therefore performed here, in onDraw().

3. Convert to a JPEG byte array

After OpenGL has drawn the filtered frame, let's look at eglSurface.toByteArray(). Here eglSurface is an EglSurface; you can see that internally it calls toOutputStream() and finally returns a ByteArray.

public fun toByteArray(format: Bitmap.CompressFormat = Bitmap.CompressFormat.PNG): ByteArray {
    val stream = ByteArrayOutputStream()
    stream.use {
        toOutputStream(it, format)
        return it.toByteArray()
    }
}
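The Kotlin use block above closes the stream automatically; in Java the same convert-via-stream idea is try-with-resources. A minimal JVM-only sketch, with no EGL involved, where compressToStream is a made-up stand-in for toOutputStream():

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class StreamSketch {
    // Stand-in for toOutputStream(): writes some "encoded" bytes into the stream.
    static void compressToStream(OutputStream stream) throws IOException {
        stream.write(new byte[] { 'J', 'P', 'E', 'G' });
    }

    // Mirrors toByteArray(): open a byte stream, fill it, return the bytes.
    static byte[] toByteArray() throws IOException {
        try (ByteArrayOutputStream stream = new ByteArrayOutputStream()) {
            compressToStream(stream);
            return stream.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(toByteArray().length); // → 4
    }
}
```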

In toOutputStream(), GLES20.glReadPixels is called; its function is to read pixel data back from the GPU frame buffer.

Specifically, this function reads the pixel data of the current frame buffer (or of a texture mapped to the frame buffer) and writes it into a memory buffer; it is the mechanism OpenGL provides for reading pixels back from the framebuffer. When using glReadPixels() for screenshots, a common technique is to first create a buffer object the size of the screen resolution and associate it with a PBO (pixel buffer object); glReadPixels() then reads the frame buffer's pixels into the PBO, after which the data can be saved as an image file or processed and analyzed further.
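To make the buffer layout concrete: glReadPixels with GL_RGBA / GL_UNSIGNED_BYTE fills 4 bytes per pixel, in R, G, B, A order. A JVM-only sketch of allocating and unpacking such a buffer; the pixel values are made up and no GL is actually called:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PixelBufferSketch {
    // Builds a buffer laid out like the one glReadPixels fills:
    // 4 bytes (R, G, B, A) per pixel, little-endian, rewound for reading.
    static ByteBuffer fakeReadPixels(int width, int height) {
        ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
        buf.order(ByteOrder.LITTLE_ENDIAN);
        // Pretend the GPU wrote an opaque red pixel and an opaque blue pixel.
        buf.put(new byte[] { (byte) 0xFF, 0, 0, (byte) 0xFF,     // pixel 0: R,G,B,A
                             0, 0, (byte) 0xFF, (byte) 0xFF });  // pixel 1: R,G,B,A
        buf.rewind(); // reset position before reading, as toOutputStream() does
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = fakeReadPixels(2, 1);
        int r0 = buf.get(0) & 0xFF; // red channel of pixel 0
        int b1 = buf.get(6) & 0xFF; // blue channel of pixel 1
        System.out.println(r0 + " " + b1); // → 255 255
    }
}
```

This is also why the source calls buf.rewind() before copyPixelsFromBuffer: the put/read position must be reset to zero before the consumer reads from the start.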

public fun toOutputStream(stream: OutputStream, format: Bitmap.CompressFormat = Bitmap.CompressFormat.PNG) {
    if (!isCurrent()) throw RuntimeException("Expected EGL context/surface is not current")

    val width = getWidth()
    val height = getHeight()
    val buf = ByteBuffer.allocateDirect(width * height * 4)
    buf.order(ByteOrder.LITTLE_ENDIAN)
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf)
    Egloo.checkGlError("glReadPixels")
    buf.rewind()
    val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    bitmap.copyPixelsFromBuffer(buf)
    bitmap.compress(format, 90, stream)
    bitmap.recycle()
}

After GLES20.glReadPixels is called, the pixel data is stored in buf; a Bitmap is then created from buf, compressed into the output stream, and returned to the caller as bytes.

4. Distribute callbacks

Finally, dispatchResult is called to distribute the callback:

protected void dispatchResult() {
    if (mListener != null) {
        mListener.onPictureResult(mResult, mError);
        mListener = null;
        mResult = null;
    }
}

CameraBaseEngine implements the PictureResultListener interface:

public void onPictureResult(PictureResult.Stub result, Exception error) {
    mPictureRecorder = null;
    if (result != null) {
        getCallback().dispatchOnPictureTaken(result);
    } else {
        getCallback().dispatchError(new CameraException(error,
                CameraException.REASON_PICTURE_FAILED));
    }
}

You can see that getCallback().dispatchOnPictureTaken() here eventually calls CameraView.dispatchOnPictureTaken():

@Override
public void dispatchOnPictureTaken(final PictureResult.Stub stub) {
    LOG.i("dispatchOnPictureTaken", stub);
    mUiHandler.post(new Runnable() {
        @Override
        public void run() {
            PictureResult result = new PictureResult(stub);
            for (CameraListener listener : mListeners) {
                listener.onPictureTaken(result);
            }
        }
    });
}

Here it traverses mListeners and calls each listener's onPictureTaken() method.
When is mListeners populated? CameraView provides an addCameraListener() method specifically for adding callbacks:

public void addCameraListener(CameraListener cameraListener) {
    mListeners.add(cameraListener);
}
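The add-then-dispatch flow above (addCameraListener stores the listener, dispatchOnPictureTaken loops over the list) is the classic observer pattern. A minimal JVM-only sketch with hypothetical names, leaving out the UI-thread Handler hop that CameraView performs:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ListenerSketch {
    interface PictureListener {
        void onPictureTaken(byte[] data);
    }

    // CopyOnWriteArrayList keeps iteration safe if listeners are added concurrently.
    static final List<PictureListener> listeners = new CopyOnWriteArrayList<>();

    // Mirrors addCameraListener(): just store the callback.
    static void addListener(PictureListener l) {
        listeners.add(l);
    }

    // Mirrors dispatchOnPictureTaken(): notify every registered listener.
    static void dispatchPictureTaken(byte[] data) {
        for (PictureListener l : listeners) {
            l.onPictureTaken(data);
        }
    }

    public static void main(String[] args) {
        addListener(data -> System.out.println("got " + data.length + " bytes"));
        dispatchPictureTaken(new byte[] { 1, 2, 3 }); // → got 3 bytes
    }
}
```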

5. Set callback

So as long as we add this callback and implement the onPictureTaken() method, we can obtain the image data after taking the photo in onPictureTaken():

binding.cameraView.addCameraListener(object : CameraListener() {
    override fun onPictureTaken(result: PictureResult) {
        super.onPictureTaken(result)
        // Photo-taken callback
        val bitmap = BitmapFactory.decodeByteArray(result.data, 0, result.data.size)
        bitmap?.also {
            Toast.makeText(this@Test2Activity, "Photo taken successfully", Toast.LENGTH_SHORT).show()
            // Set the Bitmap on the ImageView
            binding.img.setImageBitmap(it)

            val file = getNewImageFile()
            // Save the image to the specified directory
            ImageUtils.save(it, file, Bitmap.CompressFormat.JPEG)
        }
    }
})
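Note that mResult.data was already compressed with Bitmap.CompressFormat.JPEG back in step 5, so decoding to a Bitmap and re-encoding is only needed when you want to display or transform the image; the byte array can also be written to disk as-is. A JVM-only sketch of that shortcut (the file name and the fake bytes are made up for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SaveJpegSketch {
    // Write an already-encoded JPEG byte array straight to a file,
    // skipping the decode/re-encode round trip.
    static Path saveJpegBytes(byte[] jpegData, Path target) throws IOException {
        return Files.write(target, jpegData);
    }

    public static void main(String[] args) throws IOException {
        byte[] fakeJpeg = { (byte) 0xFF, (byte) 0xD8, (byte) 0xFF }; // JPEG SOI marker bytes
        Path file = saveJpegBytes(fakeJpeg, Files.createTempFile("photo", ".jpg"));
        System.out.println(Files.size(file)); // → 3
    }
}
```

Writing the bytes directly preserves the original quality-90 encoding; re-compressing a decoded Bitmap would lose a little quality a second time.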

6. Others

6.1 CameraView source code analysis series

Android Camera Library CameraView Source Code Analysis (1): Preview-CSDN Blog
Android Camera Library CameraView Source Code Analysis (2): Taking Photos-CSDN Blog
Android camera library CameraView source code analysis (3): Filter related class description-CSDN Blog
Android camera library CameraView source code analysis (4): Taking photos with filters-CSDN Blog
Android camera library CameraView source code analysis (5): Saving filter effects-CSDN Blog


Origin blog.csdn.net/EthanCo/article/details/134691849