Android Camera learning summary

In order to understand Android Camera more comprehensively, I decided to sort out the workflow from the camera app down to the HAL layer. The base platform is Qualcomm SDM845. The kernel layer has not been studied yet; I will find time to look into it later.

First, you can get a fairly comprehensive overview of Android Camera from https://source.android.google.cn/devices/camera . Here we only give a brief analysis of a few key stages.

1.1 The main process of the application layer

The main process of the application layer is as follows
(Figure: application-layer camera flow diagram)

The main application process is:

  1. Surface resource preparation
  2. openCamera process
  3. createSession process
  4. Preview and capture application process
  5. Frame data reception; the main uses are JPEG data reception, preview frame display, texture updates, etc.

Let's analyze each of the above steps one by one.

2.1 Surface resource preparation

In the Camera system, Surface is not only a display control, but also a carrier for receiving and transmitting frame data.
The Camera system first needs surface.dequeueBuffer to obtain a graphics buffer (essentially an ANativeWindowBuffer); this buffer is then passed down into the Camera system. When the Camera system produces frame data, it fills the buffer with that data and returns it to the surface through the surface.queueBuffer call [1].

There are many types of surface sources, such as SurfaceView, TextureView, SurfaceTexture, ImageReader, etc. We will not analyze their creation process in the application here.
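For illustration only (this is not code from the article; the class, method names, and parameters are assumptions), here is a minimal sketch of how an app typically obtains these surfaces:

import android.media.ImageReader;
import android.view.Surface;
import android.view.SurfaceView;
import android.view.TextureView;

// Minimal sketch: three common Surface sources that an app later hands to the Camera system.
final class SurfaceSources {

    // Direct on-screen preview: the Surface owned by a SurfaceView.
    static Surface fromSurfaceView(SurfaceView view) {
        return view.getHolder().getSurface();
    }

    // Preview as a GL texture: wrap a TextureView's SurfaceTexture in a Surface
    // (assumes the TextureView is already attached and available).
    static Surface fromTextureView(TextureView view, int width, int height) {
        view.getSurfaceTexture().setDefaultBufferSize(width, height);
        return new Surface(view.getSurfaceTexture());
    }

    // App-side frame access (e.g. JPEG capture): the app creates and keeps an ImageReader
    // (ImageReader.newInstance(width, height, ImageFormat.JPEG, maxImages)), registers an
    // OnImageAvailableListener on it, and hands its Surface to the Camera system.
    static Surface fromImageReader(ImageReader jpegReader) {
        return jpegReader.getSurface();
    }
}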

2.2 openCamera process

The brief flow chart is as follows:
(Figure: openCamera flow chart)
openCamera mainly involves three processes:

  1. The application process; the Framework API runs in this process (a minimal sketch follows this list)
  2. The CameraService process
  3. The CameraProvider process; CamX runs in this process
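For reference, process 1 enters through the Framework API CameraManager.openCamera. Below is a minimal application-side sketch (the camera id, handler, and class name are illustrative assumptions, not code from this article):

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.os.Handler;

// Minimal sketch: CameraManager.openCamera() crosses from the application process
// into CameraService over Binder, which in turn opens the device through the
// CameraProvider (CamX) process.
final class OpenCameraSketch {
    static void open(Context context, String cameraId, Handler handler)
            throws CameraAccessException {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override public void onOpened(CameraDevice device) {
                // openCamera succeeded; 'device' is backed by CameraDeviceImpl and is
                // used next for createCaptureSession().
            }
            @Override public void onDisconnected(CameraDevice device) { device.close(); }
            @Override public void onError(CameraDevice device, int error) { device.close(); }
        }, handler);
    }
}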

We will not analyze the detailed process step by step; instead, we focus on the main implementation of openCamera in CamX.

In CamX, openCamera mainly involves two steps:

  1. Create a HALDevice object and initialize its member variable m_camera3Device. m_camera3Device is a camera3_device_t structure object (CamX redefines a structure named Camera3Device that mirrors camera3_device_t, so we can treat the two as identical).
CamxResult HALDevice::Initialize(
    const HwModule* pHwModule,
    UINT32          cameraId)
{
    ....
    if (CamxResultSuccess == result)
    {
        // Initialize the m_camera3Device structure object
        m_camera3Device.hwDevice.tag     = HARDWARE_DEVICE_TAG; 
        m_camera3Device.hwDevice.version = CAMERA_DEVICE_API_VERSION_3_3;
        m_camera3Device.hwDevice.close   = reinterpret_cast<CloseFunc>(GetHwDeviceCloseFunc());
        m_camera3Device.pDeviceOps       = reinterpret_cast<Camera3DeviceOps*>(GetCamera3DeviceOps());
        m_camera3Device.pPrivateData     = this;
        // NOWHINE CP036a: Need exception here
        m_camera3Device.hwDevice.pModule = const_cast<HwModule*>(pHwModule);
        // Initialize the m_HALCallbacks structure object and pass it to CHI
        m_HALCallbacks.ProcessCaptureResult = ProcessCaptureResult;
        m_HALCallbacks.NotifyResult         = Notify;
        CamX::ChiOverrideBypass(&m_HALCallbacks);
    }
    return result;
}
  2. Initialize the HALDevice member variable m_pCamera3CbOps; the type of m_pCamera3CbOps is Camera3CbOps.
int initialize(
    const struct camera3_device*    pCamera3DeviceAPI,
    const camera3_callback_ops_t*   pCamera3CbOpsAPI)
{
    ...
    if (NULL != pCamera3CbOps)
    {
        pCamera3CbOps->cbOps.process_capture_result = process_capture_result;
        pCamera3CbOps->cbOps.notify = notify;
        pCamera3CbOps->pCamera3Device = pCamera3DeviceAPI;
        // Temporarily store the pointer passed in by CameraProvider in pCbOpsAPI
        pCamera3CbOps->pCbOpsAPI = pCamera3CbOpsAPI;
        // Hand a new pointer back through pCamera3CbOpsAPI
        pCamera3CbOpsAPI = &(pCamera3CbOps->cbOps);
    }
    // Set the new pointer on the HALDevice member m_pCamera3CbOps.
    // The purpose is that, when CameraProvider's process_capture_result is called,
    // some frame information is printed first; it can be thought of as an override/wrapper.
    return pHAL3->initialize(pCamera3DeviceAPI, pCamera3CbOpsAPI);
}

2.3 createSession process

After openCamera succeeds, CameraManager returns a CameraDeviceImpl object to the camera app, which then calls the createCaptureSession method to create the streams. createCaptureSession eventually creates the streams through configureStreamsChecked; the code is as follows:

//CameraDeviceImpl.java
public boolean configureStreamsChecked(InputConfiguration inputConfig,
        List<OutputConfiguration> outputs, int operatingMode)
                throws CameraAccessException {
    ....
    synchronized(mInterfaceLock) {
        ....
        // The repeating (preview) request must be stopped before configuring streams
        stopRepeating();

        try {
            // Wait for the camera to be in the IDLE state before configuring streams
            waitUntilIdle();
            // mRemoteDevice is the BpCameraDeviceUser proxy object;
            // the corresponding BnCameraDeviceUser is CameraDeviceClient
            // in CameraService.
            // Ask CameraService to begin the configuration
            mRemoteDevice.beginConfigure();
            ....
            // Delete all streams first (to free up HW resources)
            for (Integer streamId : deleteList) {
                // Ask CameraService to delete the stream identified by streamId
                mRemoteDevice.deleteStream(streamId);
                mConfiguredOutputs.delete(streamId);
            }
            // Add all new streams
            for (OutputConfiguration outConfig : outputs) {
                if (addSet.contains(outConfig)) {
                    // Ask CameraService to create a new stream.
                    // outConfig contains the Surface required to create the stream;
                    // the Surface's size and format determine the stream's size and format.
                    // On success, the new streamId is returned.
                    int streamId = mRemoteDevice.createStream(outConfig);
                    mConfiguredOutputs.put(streamId, outConfig);
                }
            }
            operatingMode = (operatingMode | (customOpMode << 16));
            // Ask CameraService to finish creating the streams.
            // operatingMode?
            // Based on the streams created above, CameraService
            // asks CamX to create the corresponding HAL-layer streams.
            mRemoteDevice.endConfigure(operatingMode);
            success = true;
        } catch (IllegalArgumentException e) {
            .....
        } catch (CameraAccessException e) {
            ....
        } finally {
            if (success && outputs.size() > 0) {
                mDeviceHandler.post(mCallOnIdle);
            }
        }
    }

    return success;
}
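On the application side, this whole path is triggered by createCaptureSession. Below is a minimal sketch (not code from the article; the two surfaces and the handler are assumptions, e.g. obtained as in section 2.1):

import java.util.Arrays;

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.os.Handler;
import android.view.Surface;

// Minimal sketch: the app-side call that ends up in configureStreamsChecked().
// Each Surface passed in becomes an OutputConfiguration and therefore a stream.
final class CreateSessionSketch {
    static void createSession(CameraDevice device, Surface previewSurface,
            Surface jpegSurface, Handler handler) throws CameraAccessException {
        device.createCaptureSession(Arrays.asList(previewSurface, jpegSurface),
                new CameraCaptureSession.StateCallback() {
                    @Override public void onConfigured(CameraCaptureSession session) {
                        // Streams are configured; preview/capture requests can now be submitted.
                    }
                    @Override public void onConfigureFailed(CameraCaptureSession session) {
                        // Stream configuration failed.
                    }
                }, handler);
    }
}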

According to configureStreamsChecked above, the stream configuration flow is as follows:

  1. beginConfigure
  2. deleteStream/createStream
  3. endConfigure

Let's analyze the implementation of each step in turn.

2.3.1 beginConfigure process

CameraService does nothing in beginConfigure; the code is as follows:

binder::Status CameraDeviceClient::beginConfigure() {
    ALOGV("%s: Not implemented yet.", __FUNCTION__);
    return binder::Status::ok();
}

2.3.2 deleteStream/createStream process

Let's first look at the createStream process.

2.3.2.1 createStream process

(Figure: createStream flow chart)

2.4 Preview and capture application process

To be continued...
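Before the detailed analysis, here is a minimal application-side sketch of what this step covers (not from the article; the device, session, surfaces, and handler are assumptions carried over from the earlier sketches): preview is driven by a repeating request, and a still capture by a one-shot request.

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;

// Minimal sketch: a repeating request drives preview, a one-shot request takes a picture;
// both are delivered to CameraService and then down to CamX.
final class RequestSketch {
    static void startPreview(CameraDevice device, CameraCaptureSession session,
            Surface previewSurface, Handler handler) throws CameraAccessException {
        CaptureRequest.Builder builder =
                device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        builder.addTarget(previewSurface);
        session.setRepeatingRequest(builder.build(), /*callback=*/ null, handler);
    }

    static void takePicture(CameraDevice device, CameraCaptureSession session,
            Surface jpegSurface, Handler handler) throws CameraAccessException {
        CaptureRequest.Builder builder =
                device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.addTarget(jpegSurface);
        session.capture(builder.build(), /*callback=*/ null, handler);
    }
}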


  1. For a more detailed introduction, refer to the request, delivery, and return flow of Android graphics buffers in CameraService, CameraProvider, and the Camera HAL.

Origin blog.csdn.net/u010116586/article/details/106910001