A detailed explanation of the Android 13 Camera preview process


Environment introduction

Item            Description
Hardware        Google Pixel 5
AOSP version    android-13.0.0_r40
CameraProvider  android.hardware.camera.provider@2.4-service_64
Camera HAL      Google's reference implementation camera.v4l2

The environment above may look a bit odd, because camera.provider@2.4 is no longer used in Android 13. We analyze it anyway because the AOSP versions many companies currently ship are not the latest, and provider version 2.4 is still the one used most often. After covering 2.4, we will also analyze the newer AIDL-based camera.provider flow. One more caveat: vendors do not open-source their HAL code, and the Pixel 5 in my hand uses a Qualcomm chip whose Camera HAL source is not visible, so I used Google's reference implementation camera.v4l2 to trace the flow. The configuration in the table above never actually ran on the Pixel 5: I tried to bring it up, and eventually found that Qualcomm's Camera HAL does not use the /dev/video node for preview, while camera.v4l2 is implemented on top of exactly that node, so it cannot be made to work. We will therefore walk through this article by diagram and code flow.

Process analysis

The core API calls of the preview process are essentially all contained in the function below; we will analyze the key steps in detail.

private void createCameraPreviewSession() {
    try {
        SurfaceTexture texture = mTextureView.getSurfaceTexture();
        assert texture != null;

        // We configure the size of default buffer to be the size of camera preview we want.
        texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

        // This is the output Surface we need to start preview.
        // Create a Surface backed by the SurfaceTexture of the TextureView used for display;
        // the SurfaceTexture in turn carries the preview resolution.
        Surface surface = new Surface(texture);

        // We set up a CaptureRequest.Builder with the output Surface.
        // The process below has two main steps: first, create the CameraCaptureSession object;
        // second, create the CaptureRequest object (after setting a few parameters).
        mPreviewRequestBuilder
                = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        mPreviewRequestBuilder.addTarget(surface);

        // Here, we create a CameraCaptureSession for camera preview.
        mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                new CameraCaptureSession.StateCallback() {

                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                        // The camera is already closed
                        if (null == mCameraDevice) {
                            return;
                        }

                        // When the session is ready, we start displaying the preview.
                        mCaptureSession = cameraCaptureSession;
                        try {
                            // Auto focus should be continuous for camera preview.
                            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                    CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                            // Flash is automatically enabled when necessary.
                            setAutoFlash(mPreviewRequestBuilder);

                            // Finally, we start displaying the camera preview.
                            mPreviewRequest = mPreviewRequestBuilder.build();
                            // Use the newly created CameraCaptureSession and CaptureRequest
                            // to kick off the repeating preview request.
                            mCaptureSession.setRepeatingRequest(mPreviewRequest,
                                    mCaptureCallback, mBackgroundHandler);
                        } catch (CameraAccessException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(
                            @NonNull CameraCaptureSession cameraCaptureSession) {
                        showToast("Failed");
                    }
                }, null
        );
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}

Create CameraCaptureSession object

Let's focus on the implementation of createCaptureSession. If it succeeds, the CameraCaptureSession object is delivered through the callback.

// frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java
@Override
// outputs: the two Surfaces we passed in, one for preview and one for still capture
// callback: result callback; on success the CameraCaptureSession is returned via onConfigured
// handler: the thread on which the callback runs
public void createCaptureSession(List<Surface> outputs,
        CameraCaptureSession.StateCallback callback, Handler handler)
        throws CameraAccessException {
    // Wrap each Surface in an OutputConfiguration and collect them into a list
    List<OutputConfiguration> outConfigurations = new ArrayList<>(outputs.size());
    for (Surface surface : outputs) {
        outConfigurations.add(new OutputConfiguration(surface));
    }
    // Delegate to the internal helper
    createCaptureSessionInternal(null, outConfigurations, callback,
            checkAndWrapHandler(handler), /*operatingMode*/ICameraDeviceUser.NORMAL_MODE,
            /*sessionParams*/ null);
}


// inputConfig: null
// outputConfigurations: wrappers around the Surfaces
// callback: as above
// executor: wrapper around the handler, covered in an earlier article
// operatingMode: ICameraDeviceUser.NORMAL_MODE
// sessionParams: null
private void createCaptureSessionInternal(InputConfiguration inputConfig,
        List<OutputConfiguration> outputConfigurations,
        CameraCaptureSession.StateCallback callback, Executor executor,
        int operatingMode, CaptureRequest sessionParams) throws CameraAccessException {
    // Record the start time
    long createSessionStartTime = SystemClock.uptimeMillis();
    synchronized(mInterfaceLock) {
        if (DEBUG) {
            Log.d(TAG, "createCaptureSessionInternal");
        }

        checkIfCameraClosedOrInError();

        // isConstrainedHighSpeed is false here; it is true when the session is created
        // via createConstrainedHighSpeedCaptureSession, which gets its own chapter later
        boolean isConstrainedHighSpeed =
                (operatingMode == ICameraDeviceUser.CONSTRAINED_HIGH_SPEED_MODE);
        // When isConstrainedHighSpeed is true, inputConfig must be null;
        // what inputConfig is for will be covered later
        if (isConstrainedHighSpeed && inputConfig != null) {
            throw new IllegalArgumentException("Constrained high speed session doesn't support"
                    + " input configuration yet.");
        }

        // Release stale resources first
        // Notify current session that it's going away, before starting camera operations
        // After this call completes, the session is not allowed to call into CameraDeviceImpl
        if (mCurrentSession != null) {
            mCurrentSession.replaceSessionClose();
        }

        if (mCurrentExtensionSession != null) {
            mCurrentExtensionSession.release(false /*skipCloseNotification*/);
            mCurrentExtensionSession = null;
        }

        if (mCurrentAdvancedExtensionSession != null) {
            mCurrentAdvancedExtensionSession.release(false /*skipCloseNotification*/);
            mCurrentAdvancedExtensionSession = null;
        }

        // TODO: dont block for this
        boolean configureSuccess = true;
        CameraAccessException pendingException = null;
        Surface input = null;
        try {
            // configure streams and then block until IDLE
            // Here comes the key function
            configureSuccess = configureStreamsChecked(inputConfig, outputConfigurations,
                    operatingMode, sessionParams, createSessionStartTime);
            if (configureSuccess == true && inputConfig != null) {
                input = mRemoteDevice.getInputSurface();
            }
        } catch (CameraAccessException e) {
            configureSuccess = false;
            pendingException = e;
            input = null;
            if (DEBUG) {
                Log.v(TAG, "createCaptureSession - failed with exception ", e);
            }
        }

        // Fire onConfigured if configureOutputs succeeded, fire onConfigureFailed otherwise.
        CameraCaptureSessionCore newSession = null;
        if (isConstrainedHighSpeed) {
            ArrayList<Surface> surfaces = new ArrayList<>(outputConfigurations.size());
            for (OutputConfiguration outConfig : outputConfigurations) {
                surfaces.add(outConfig.getSurface());
            }
            StreamConfigurationMap config =
                getCharacteristics().get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
            SurfaceUtils.checkConstrainedHighSpeedSurfaces(surfaces, /*fpsRange*/null, config);

            newSession = new CameraConstrainedHighSpeedCaptureSessionImpl(mNextSessionId++,
                    callback, executor, this, mDeviceExecutor, configureSuccess,
                    mCharacteristics);
        } else {
            // Create the new session
            newSession = new CameraCaptureSessionImpl(mNextSessionId++, input,
                    callback, executor, this, mDeviceExecutor, configureSuccess);
        }

        // TODO: wait until current session closes, then create the new session
        // Store it in mCurrentSession
        mCurrentSession = newSession;

        if (pendingException != null) {
            throw pendingException;
        }

        mSessionStateCallback = mCurrentSession.getDeviceStateCallback();
    }
}


// Next up: the implementation of configureStreamsChecked
// inputConfig: null
// outputs: wrappers around the Surfaces
// operatingMode: ICameraDeviceUser.NORMAL_MODE
// sessionParams: null
// createSessionStartTime: start time
public boolean configureStreamsChecked(InputConfiguration inputConfig,
        List<OutputConfiguration> outputs, int operatingMode, CaptureRequest sessionParams,
        long createSessionStartTime)
                throws CameraAccessException {
    // Treat a null input the same an empty list
    if (outputs == null) {
        outputs = new ArrayList<OutputConfiguration>();
    }
    if (outputs.size() == 0 && inputConfig != null) {
        throw new IllegalArgumentException("cannot configure an input stream without " +
                "any output streams");
    }

    // inputConfig is null, so we skip this; the non-null case gets its own chapter later
    checkInputConfiguration(inputConfig);

    boolean success = false;

    synchronized(mInterfaceLock) {
        checkIfCameraClosedOrInError();
        // Streams to create
        HashSet<OutputConfiguration> addSet = new HashSet<OutputConfiguration>(outputs);
        // Streams to delete
        List<Integer> deleteList = new ArrayList<Integer>();

        // Determine which streams need to be created, which to be deleted
        // mConfiguredOutputs: the configurations that already exist
        for (int i = 0; i < mConfiguredOutputs.size(); ++i) {
            int streamId = mConfiguredOutputs.keyAt(i);
            OutputConfiguration outConfig = mConfiguredOutputs.valueAt(i);
            // If an element of mConfiguredOutputs is not present in the incoming outputs
            // list, or its isDeferredConfiguration flag is true, put it on deleteList.
            // E.g. incoming = {1, 2}, existing = {2, 3}: 3 goes on deleteList.
            if (!outputs.contains(outConfig) || outConfig.isDeferredConfiguration()) {
                // Always delete the deferred output configuration when the session
                // is created, as the deferred output configuration doesn't have unique surface
                // related identifies.
                deleteList.add(streamId);
            } else {
                // Otherwise the element already exists in the incoming outputs, so remove
                // it from addSet to avoid creating the same stream twice (createStream is
                // called further below).
                // E.g. incoming = {1, 2}, existing = {2}: remove 2 from addSet, so only a
                // stream for 1 needs to be created.
                addSet.remove(outConfig);  // Don't create a stream previously created
            }
        }
        // This schedules the onBusy method of mSessionStateCallback, but
        // mSessionStateCallback is still null here; it is only assigned after
        // createCaptureSession finishes, which is when mCallOnBusy actually runs,
        // because at this point the handler queue is still waiting for mCallOnOpened
        // to complete. (createCaptureSession is invoked from the onOpened callback.)
        mDeviceExecutor.execute(mCallOnBusy);
        // If there is already a repeating request in flight, stop it
        stopRepeating();

        try {
            // This calls mRemoteDevice.waitUntilIdle() internally.
            // mRemoteDevice here is effectively CameraDeviceClient.
            // It is worth mapping out what mRemoteDevice corresponds to across processes
            // and modules, so that later calls on mRemoteDevice can be traced straight
            // down to the lower-layer functions:
            // mRemoteDevice --> CameraDeviceClient --> HidlCamera3Device or AidlCamera3Device
            // --> Camera3Device
            // In most cases you can simply look for the function with the same name along
            // this chain; where the names differ, follow the chain in the same order to
            // find the implementation.
            waitUntilIdle();

            // When this reaches CameraDeviceClient it just logs one line and returns:
            /*
            binder::Status CameraDeviceClient::beginConfigure() {
                // TODO: Implement this.
                ATRACE_CALL();
                ALOGV("%s: Not implemented yet.", __FUNCTION__);
                return binder::Status::ok();
            }
            */
            mRemoteDevice.beginConfigure();

            // input is null, so we can ignore this for now
            // reconfigure the input stream if the input configuration is different.
            InputConfiguration currentInputConfig = mConfiguredInput.getValue();
            if (inputConfig != currentInputConfig &&
                    (inputConfig == null || !inputConfig.equals(currentInputConfig))) {
                if (currentInputConfig != null) {
                    mRemoteDevice.deleteStream(mConfiguredInput.getKey());
                    mConfiguredInput = new SimpleEntry<Integer, InputConfiguration>(
                            REQUEST_ID_NONE, null);
                }
                if (inputConfig != null) {
                    int streamId = mRemoteDevice.createInputStream(inputConfig.getWidth(),
                            inputConfig.getHeight(), inputConfig.getFormat(),
                            inputConfig.isMultiResolution());
                    mConfiguredInput = new SimpleEntry<Integer, InputConfiguration>(
                            streamId, inputConfig);
                }
            }

            // Process deleteList: old outputs that are not in the incoming list.
            // We look at the createStream flow below first, then deleteStream.
            // Delete all streams first (to free up HW resources)
            for (Integer streamId : deleteList) {
                // These call sites are numbered and analyzed in order below
                // 2. mRemoteDevice.deleteStream
                mRemoteDevice.deleteStream(streamId);
                mConfiguredOutputs.delete(streamId);
            }

            // Add all new streams
            // At this point, addSet contains exactly the outputs that need createStream
            for (OutputConfiguration outConfig : outputs) {
                if (addSet.contains(outConfig)) {
                    // 1. mRemoteDevice.createStream
                    int streamId = mRemoteDevice.createStream(outConfig);
                    mConfiguredOutputs.put(streamId, outConfig);
                }
            }

            int offlineStreamIds[];
            // sessionParams = null
            if (sessionParams != null) {
                offlineStreamIds = mRemoteDevice.endConfigure(operatingMode,
                        sessionParams.getNativeCopy(), createSessionStartTime);
            } else {
                // 3. mRemoteDevice.endConfigure
                offlineStreamIds = mRemoteDevice.endConfigure(operatingMode, null,
                        createSessionStartTime);
            }

            mOfflineSupport.clear();
            if ((offlineStreamIds != null) && (offlineStreamIds.length > 0)) {
                for (int offlineStreamId : offlineStreamIds) {
                    mOfflineSupport.add(offlineStreamId);
                }
            }

            success = true;
        } catch (IllegalArgumentException e) {
            // OK. camera service can reject stream config if it's not supported by HAL
            // This is only the result of a programmer misusing the camera2 api.
            Log.w(TAG, "Stream configuration failed due to: " + e.getMessage());
            return false;
        } catch (CameraAccessException e) {
            if (e.getReason() == CameraAccessException.CAMERA_IN_USE) {
                throw new IllegalStateException("The camera is currently busy." +
                        " You must wait until the previous operation completes.", e);
            }
            throw e;
        } finally {
            if (success && outputs.size() > 0) {
                // 4. mCallOnIdle
                mDeviceExecutor.execute(mCallOnIdle);
            } else {
                // 5. mCallOnUnconfigured
                // Always return to the 'unconfigured' state if we didn't hit a fatal error
                mDeviceExecutor.execute(mCallOnUnconfigured);
            }
        }
    }

    return success;
}
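The addSet/deleteList bookkeeping above is essentially a set difference between the currently configured outputs and the newly requested ones. Here is a minimal standalone sketch of that diff in plain Java, with strings standing in for OutputConfiguration objects and integers for stream IDs; all names are illustrative, not taken from AOSP:

```java
import java.util.*;

public class StreamDiff {
    // Mirrors the diff in configureStreamsChecked: anything configured but no longer
    // requested is deleted; anything requested but not yet configured is created.
    public static void diff(Map<Integer, String> configured, List<String> requested,
                            Set<String> toCreate, List<Integer> toDelete) {
        toCreate.addAll(requested);
        for (Map.Entry<Integer, String> e : configured.entrySet()) {
            if (!requested.contains(e.getValue())) {
                toDelete.add(e.getKey());      // stale stream: delete
            } else {
                toCreate.remove(e.getValue()); // already configured: don't recreate
            }
        }
    }

    public static void main(String[] args) {
        // Existing streams: id 10 -> "B", id 11 -> "C"; new request: "A", "B"
        Map<Integer, String> configured = new LinkedHashMap<>();
        configured.put(10, "B");
        configured.put(11, "C");
        Set<String> toCreate = new LinkedHashSet<>();
        List<Integer> toDelete = new ArrayList<>();
        diff(configured, Arrays.asList("A", "B"), toCreate, toDelete);
        System.out.println(toCreate); // [A]  -> only "A" needs createStream
        System.out.println(toDelete); // [11] -> the stream for "C" is deleted
    }
}
```

In the steady preview case both sets come out empty, which is why re-creating a session with the same surfaces avoids redundant createStream calls.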

We now analyze each of the numbered call sites in turn.

1. mRemoteDevice.createStream

// mRemoteDevice corresponds to CameraDeviceClient in cameraserver
// frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp
binder::Status CameraDeviceClient::createStream(
        const hardware::camera2::params::OutputConfiguration &outputConfiguration,
        /*out*/
        int32_t* newStreamId) {
    ATRACE_CALL();

    binder::Status res;
    if (!(res = checkPidStatus(__FUNCTION__)).isOk()) return res;

    Mutex::Autolock icl(mBinderSerializationLock);

    // OutputConfiguration has an mIsShared attribute, false by default, in which case
    // each OutputConfiguration contains exactly one Surface and bufferProducers has
    // size 1. Note that each stream supports at most 4 Surfaces, i.e.
    // bufferProducers.size() is at most 4:
    // constexpr int32_t MAX_SURFACES_PER_STREAM = 4;
    const std::vector<sp<IGraphicBufferProducer>>& bufferProducers =
            outputConfiguration.getGraphicBufferProducers();
    // numBufferProducers = 1 here, 4 at most
    size_t numBufferProducers = bufferProducers.size();
    // false by default
    bool deferredConsumer = outputConfiguration.isDeferred();
    // false by default
    bool isShared = outputConfiguration.isShared();
    // null by default
    String8 physicalCameraId = String8(outputConfiguration.getPhysicalCameraId());
    // false
    bool deferredConsumerOnly = deferredConsumer && numBufferProducers == 0;
    // false
    bool isMultiResolution = outputConfiguration.isMultiResolution();
    // DynamicRangeProfiles.STANDARD
    int64_t dynamicRangeProfile = outputConfiguration.getDynamicRangeProfile();
    // CameraMetadata.SCALER_AVAILABLE_STREAM_USE_CASES_DEFAULT
    int64_t streamUseCase = outputConfiguration.getStreamUseCase();
    // TIMESTAMP_BASE_DEFAULT 0
    int timestampBase = outputConfiguration.getTimestampBase();
    // MIRROR_MODE_AUTO 0
    int mirrorMode = outputConfiguration.getMirrorMode();

    // Some straightforward code omitted here
    ...

    std::vector<sp<Surface>> surfaces;
    std::vector<sp<IBinder>> binders;
    status_t err;

    // Create stream for deferred surface case.
    if (deferredConsumerOnly) {
        return createDeferredSurfaceStreamLocked(outputConfiguration, isShared, newStreamId);
    }

    OutputStreamInfo streamInfo;
    bool isStreamInfoValid = false;
    const std::vector<int32_t> &sensorPixelModesUsed =
            outputConfiguration.getSensorPixelModesUsed();
    for (auto& bufferProducer : bufferProducers) {
        // Don't create multiple streams for the same target surface
        sp<IBinder> binder = IInterface::asBinder(bufferProducer);
        ssize_t index = mStreamMap.indexOfKey(binder);
        if (index != NAME_NOT_FOUND) {
            String8 msg = String8::format("Camera %s: Surface already has a stream created for it "
                    "(ID %zd)", mCameraIdStr.string(), index);
            ALOGW("%s: %s", __FUNCTION__, msg.string());
            return STATUS_ERROR(CameraService::ERROR_ALREADY_EXISTS, msg.string());
        }

        // Create a Surface object from the pile of parameters above
        sp<Surface> surface;
        res = SessionConfigurationUtils::createSurfaceFromGbp(streamInfo,
                isStreamInfoValid, surface, bufferProducer, mCameraIdStr,
                mDevice->infoPhysical(physicalCameraId), sensorPixelModesUsed, dynamicRangeProfile,
                streamUseCase, timestampBase, mirrorMode);

        if (!res.isOk())
            return res;

        if (!isStreamInfoValid) {
            isStreamInfoValid = true;
        }

        // Append to the lists; usually they end up with length 1
        binders.push_back(IInterface::asBinder(bufferProducer));
        surfaces.push_back(surface);
    }

    // If mOverrideForPerfClass is true, do not fail createStream() for small
    // JPEG sizes because existing createSurfaceFromGbp() logic will find the
    // closest possible supported size.

    int streamId = camera3::CAMERA3_STREAM_ID_INVALID;
    std::vector<int> surfaceIds;
    bool isDepthCompositeStream =
            camera3::DepthCompositeStream::isDepthCompositeStream(surfaces[0]);
    bool isHeicCompisiteStream = camera3::HeicCompositeStream::isHeicCompositeStream(surfaces[0]);
    // false
    if (isDepthCompositeStream || isHeicCompisiteStream) {
        sp<CompositeStream> compositeStream;
        if (isDepthCompositeStream) {
            compositeStream = new camera3::DepthCompositeStream(mDevice, getRemoteCallback());
        } else {
            compositeStream = new camera3::HeicCompositeStream(mDevice, getRemoteCallback());
        }

        err = compositeStream->createStream(surfaces, deferredConsumer, streamInfo.width,
                streamInfo.height, streamInfo.format,
                static_cast<camera_stream_rotation_t>(outputConfiguration.getRotation()),
                &streamId, physicalCameraId, streamInfo.sensorPixelModesUsed, &surfaceIds,
                outputConfiguration.getSurfaceSetID(), isShared, isMultiResolution);
        if (err == OK) {
            Mutex::Autolock l(mCompositeLock);
            mCompositeStreamMap.add(IInterface::asBinder(surfaces[0]->getIGraphicBufferProducer()),
                    compositeStream);
        }
    } else {
        // Next comes mDevice->createStream, which takes 19 parameters; couldn't Google
        // trim this down a little? It's easy to lose track while stepping through.
        // mDevice here is HidlCamera3Device or AidlCamera3Device, both of which inherit
        // from Camera3Device, so this ends up in the createStream method in
        // Camera3Device.cpp, which we trace in detail below.
        err = mDevice->createStream(surfaces, deferredConsumer, streamInfo.width,
                streamInfo.height, streamInfo.format, streamInfo.dataSpace,
                static_cast<camera_stream_rotation_t>(outputConfiguration.getRotation()),
                &streamId, physicalCameraId, streamInfo.sensorPixelModesUsed, &surfaceIds,
                outputConfiguration.getSurfaceSetID(), isShared, isMultiResolution,
                /*consumerUsage*/0, streamInfo.dynamicRangeProfile, streamInfo.streamUseCase,
                streamInfo.timestampBase, streamInfo.mirrorMode);
    }

    if (err != OK) {
        res = STATUS_ERROR_FMT(CameraService::ERROR_INVALID_OPERATION,
                "Camera %s: Error creating output stream (%d x %d, fmt %x, dataSpace %x): %s (%d)",
                mCameraIdStr.string(), streamInfo.width, streamInfo.height, streamInfo.format,
                streamInfo.dataSpace, strerror(-err), err);
    } else {
        int i = 0;
        for (auto& binder : binders) {
            ALOGV("%s: mStreamMap add binder %p streamId %d, surfaceId %d",
                    __FUNCTION__, binder.get(), streamId, i);
            // Creation succeeded; add to the map
            mStreamMap.add(binder, StreamSurfaceId(streamId, surfaceIds[i]));
            i++;
        }

        // Update the bookkeeping
        mConfiguredOutputs.add(streamId, outputConfiguration);
        mStreamInfoMap[streamId] = streamInfo;

        ALOGV("%s: Camera %s: Successfully created a new stream ID %d for output surface"
                    " (%d x %d) with format 0x%x.",
                  __FUNCTION__, mCameraIdStr.string(), streamId, streamInfo.width,
                  streamInfo.height, streamInfo.format);
        // Some code omitted
        .....

        // Return the streamId
        *newStreamId = streamId;
    }

    return res;
}

Next, let’s take a closer look at the specific implementation of the createStream method in Camera3Device.cpp.

status_t Camera3Device::createStream(const std::vector<sp<Surface>>& consumers,
        bool hasDeferredConsumer, uint32_t width, uint32_t height, int format,
        android_dataspace dataSpace, camera_stream_rotation_t rotation, int *id,
        const String8& physicalCameraId, const std::unordered_set<int32_t> &sensorPixelModesUsed,
        std::vector<int> *surfaceIds, int streamSetId, bool isShared, bool isMultiResolution,
        uint64_t consumerUsage, int64_t dynamicRangeProfile, int64_t streamUseCase,
        int timestampBase, int mirrorMode) {
    ATRACE_CALL();

    // Some code omitted here
    ......
    // Below, the kind of OutputStream created depends on the format.
    // An ImageReader configured for JPEG still capture maps to HAL_PIXEL_FORMAT_BLOB
    // and takes the if branch below
    if (format == HAL_PIXEL_FORMAT_BLOB) {
        ssize_t blobBufferSize;
        if (dataSpace == HAL_DATASPACE_DEPTH) {
            blobBufferSize = getPointCloudBufferSize(infoPhysical(physicalCameraId));
            if (blobBufferSize <= 0) {
                SET_ERR_L("Invalid point cloud buffer size %zd", blobBufferSize);
                return BAD_VALUE;
            }
        } else if (dataSpace == static_cast<android_dataspace>(HAL_DATASPACE_JPEG_APP_SEGMENTS)) {
            blobBufferSize = width * height;
        } else {
            blobBufferSize = getJpegBufferSize(infoPhysical(physicalCameraId), width, height);
            if (blobBufferSize <= 0) {
                SET_ERR_L("Invalid jpeg buffer size %zd", blobBufferSize);
                return BAD_VALUE;
            }
        }
        ALOGV("new Camera3OutputStream......");
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, blobBufferSize, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, sensorPixelModesUsed, transport, streamSetId,
                isMultiResolution, dynamicRangeProfile, streamUseCase, mDeviceTimeBaseIsRealtime,
                timestampBase, mirrorMode);
    } else if (format == HAL_PIXEL_FORMAT_RAW_OPAQUE) {
        bool maxResolution =
                sensorPixelModesUsed.find(ANDROID_SENSOR_PIXEL_MODE_MAXIMUM_RESOLUTION) !=
                        sensorPixelModesUsed.end();
        ssize_t rawOpaqueBufferSize = getRawOpaqueBufferSize(infoPhysical(physicalCameraId), width,
                height, maxResolution);
        if (rawOpaqueBufferSize <= 0) {
            SET_ERR_L("Invalid RAW opaque buffer size %zd", rawOpaqueBufferSize);
            return BAD_VALUE;
        }
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, rawOpaqueBufferSize, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, sensorPixelModesUsed, transport, streamSetId,
                isMultiResolution, dynamicRangeProfile, streamUseCase, mDeviceTimeBaseIsRealtime,
                timestampBase, mirrorMode);
    } else if (isShared) {
        newStream = new Camera3SharedOutputStream(mNextStreamId, consumers,
                width, height, format, consumerUsage, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, sensorPixelModesUsed, transport, streamSetId,
                mUseHalBufManager, dynamicRangeProfile, streamUseCase, mDeviceTimeBaseIsRealtime,
                timestampBase, mirrorMode);
    } else if (consumers.size() == 0 && hasDeferredConsumer) {
        newStream = new Camera3OutputStream(mNextStreamId,
                width, height, format, consumerUsage, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, sensorPixelModesUsed, transport, streamSetId,
                isMultiResolution, dynamicRangeProfile, streamUseCase, mDeviceTimeBaseIsRealtime,
                timestampBase, mirrorMode);
    } else {
        // The preview stream (and an ImageReader using a YUV format such as YUV_420_888)
        // takes this branch
        ALOGV("Camera3OutputStream...........");
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, sensorPixelModesUsed, transport, streamSetId,
                isMultiResolution, dynamicRangeProfile, streamUseCase, mDeviceTimeBaseIsRealtime,
                timestampBase, mirrorMode);
    }

    // As the code above shows, except when isShared is true, a Camera3OutputStream is
    // created in every case, and Camera3SharedOutputStream itself inherits from
    // Camera3OutputStream. We won't walk through the Camera3OutputStream constructor
    // here; it is fairly simple.

    size_t consumerCount = consumers.size();
    for (size_t i = 0; i < consumerCount; i++) {
        int id = newStream->getSurfaceId(consumers[i]);
        if (id < 0) {
            SET_ERR_L("Invalid surface id");
            return BAD_VALUE;
        }
        if (surfaceIds != nullptr) {
            surfaceIds->push_back(id);
        }
    }

    newStream->setStatusTracker(mStatusTracker);

    newStream->setBufferManager(mBufferManager);

    newStream->setImageDumpMask(mImageDumpMask);

    res = mOutputStreams.add(mNextStreamId, newStream);
    if (res < 0) {
        SET_ERR_L("Can't add new stream to set: %s (%d)", strerror(-res), res);
        return res;
    }

    mSessionStatsBuilder.addStream(mNextStreamId);

    *id = mNextStreamId++;
    mNeedConfig = true;

    // Some code omitted
    ......
    ALOGV("Camera %s: Created new stream", mId.string());
    return OK;
}

To sum up, createStream never actually reaches the Camera HAL; it only creates some intermediate objects in memory for subsequent operations.
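The bookkeeping side of createStream, rejecting a Surface that already owns a stream (mStreamMap) and handing out monotonically increasing stream IDs (mNextStreamId), can be modeled in a few lines of plain Java. The class and method shapes here are illustrative stand-ins, not the AOSP types:

```java
import java.util.*;

public class StreamRegistry {
    // Models mStreamMap / mNextStreamId: one stream per surface, ids handed out in order
    private final Map<String, Integer> streamMap = new HashMap<>();
    private int nextStreamId = 0;

    // Returns the new stream id, or -1 if the surface already has a stream
    // (CameraDeviceClient returns ERROR_ALREADY_EXISTS in that case)
    public int createStream(String surface) {
        if (streamMap.containsKey(surface)) {
            return -1;
        }
        int id = nextStreamId++;
        streamMap.put(surface, id);
        return id;
    }

    public static void main(String[] args) {
        StreamRegistry r = new StreamRegistry();
        System.out.println(r.createStream("preview")); // 0
        System.out.println(r.createStream("jpeg"));    // 1
        System.out.println(r.createStream("preview")); // -1: surface already has a stream
    }
}
```

This is why passing the same Surface to two OutputConfigurations in one session fails with ERROR_ALREADY_EXISTS rather than silently creating a second stream.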

2. mRemoteDevice.deleteStream

deleteStream is the inverse of createStream: it simply removes entries from the same data structures. You can walk through that part yourself.

3. mRemoteDevice.endConfigure

// endConfigure是createCaptureSession流程中的重点
binder::Status CameraDeviceClient::endConfigure(int operatingMode,
        const hardware::camera2::impl::CameraMetadataNative& sessionParams, int64_t startTimeMs,
        std::vector<int>* offlineStreamIds /*out*/) {
    
    
    ATRACE_CALL();
    ALOGV("%s: ending configure (%d input stream, %zu output surfaces)",
            __FUNCTION__, mInputStream.configured ? 1 : 0,
            mStreamMap.size());

	// 省略部分代码
	......
	// 不错,这一句是重点,其他代码我们都省略了
    status_t err = mDevice->configureStreams(sessionParams, operatingMode);
   // 省略部分代码
   ......

    return res;
}
status_t Camera3Device::configureStreams(const CameraMetadata& sessionParams, int operatingMode) {
    ATRACE_CALL();
    ALOGV("%s: E", __FUNCTION__);

    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);

    // In case the client doesn't include any session parameter, try a
    // speculative configuration using the values from the last cached
    // default request.
    if (sessionParams.isEmpty() &&
            ((mLastTemplateId > 0) && (mLastTemplateId < CAMERA_TEMPLATE_COUNT)) &&
            (!mRequestTemplateCache[mLastTemplateId].isEmpty())) {
        ALOGV("%s: Speculative session param configuration with template id: %d", __func__,
                mLastTemplateId);
        return filterParamsAndConfigureLocked(mRequestTemplateCache[mLastTemplateId],
                operatingMode);
    }

    return filterParamsAndConfigureLocked(sessionParams, operatingMode);
}

status_t Camera3Device::filterParamsAndConfigureLocked(const CameraMetadata& sessionParams,
        int operatingMode) {
    //Filter out any incoming session parameters
    const CameraMetadata params(sessionParams);
    camera_metadata_entry_t availableSessionKeys = mDeviceInfo.find(
            ANDROID_REQUEST_AVAILABLE_SESSION_KEYS);
    CameraMetadata filteredParams(availableSessionKeys.count);
    camera_metadata_t *meta = const_cast<camera_metadata_t *>(
            filteredParams.getAndLock());
    set_camera_metadata_vendor_id(meta, mVendorTagId);
    filteredParams.unlock(meta);
    if (availableSessionKeys.count > 0) {
        for (size_t i = 0; i < availableSessionKeys.count; i++) {
            camera_metadata_ro_entry entry = params.find(
                    availableSessionKeys.data.i32[i]);
            if (entry.count > 0) {
                filteredParams.update(entry);
            }
        }
    }

    return configureStreamsLocked(operatingMode, filteredParams);
}

// Eventually configureStreamsLocked is called
status_t Camera3Device::configureStreamsLocked(int operatingMode,
        const CameraMetadata& sessionParams, bool notifyRequestThread) {
	// Part of the code is omitted
	......
    mGroupIdPhysicalCameraMap.clear();
    bool composerSurfacePresent = false;
    for (size_t i = 0; i < mOutputStreams.size(); i++) {
        // Don't configure bidi streams twice, nor add them twice to the list
        if (mOutputStreams[i].get() ==
            static_cast<Camera3StreamInterface*>(mInputStream.get())) {
            config.num_streams--;
            continue;
        }

        camera3::camera_stream_t *outputStream;
        // This outputStream is the one created earlier by createStream
        outputStream = mOutputStreams[i]->startConfiguration();
        if (outputStream == NULL) {
            CLOGE("Can't start output stream configuration");
            cancelStreamsConfigurationLocked();
            return INVALID_OPERATION;
        }
        streams.add(outputStream);
        // Part of the code is omitted
        ......
    }

    config.streams = streams.editArray();

    // Do the HAL configuration; will potentially touch stream
    // max_buffers, usage, and priv fields, as well as data_space and format
    // fields for IMPLEMENTATION_DEFINED formats.

    const camera_metadata_t *sessionBuffer = sessionParams.getAndLock();
    // Here we call mInterface->configureStreams. So what is mInterface?
    // It is either AidlHalInterface or HidlHalInterface; in this article
    // we use the HIDL interface as the example. HidlHalInterface's
    // configureStreams implementation dispatches to the configureStreams
    // matching the HidlSession version, which finally lands in
    // CameraDeviceSession::configureStreams_3_4_Impl
    res = mInterface->configureStreams(sessionBuffer, &config, bufferSizes);

	// Part of the code is omitted
	......
    return OK;
}

Next, let’s look at the implementation of CameraDeviceSession::configureStreams_3_4_Impl. We will skip the unimportant code here and leave the details of each step for a later article.

// hardware/interfaces/camera/device/3.4/default/CameraDeviceSession.cpp
void CameraDeviceSession::configureStreams_3_4_Impl(
        const StreamConfiguration& requestedConfiguration,
        ICameraDeviceSession::configureStreams_3_4_cb _hidl_cb,
        uint32_t streamConfigCounter, bool useOverriddenFields)  {
    // Part of the code is omitted
    ......
    ATRACE_BEGIN("camera3->configure_streams");
    // mDevice here is of type camera_device_t*, so we can go straight to the
    // HAL .so library to find the implementation.
    status_t ret = mDevice->ops->configure_streams(mDevice, &stream_list);
    // Part of the code is omitted
    ......

    _hidl_cb(status, outStreams);
    return;
}

// We refer to Google's reference implementation
// hardware/libhardware/modules/camera/3_4/camera.cpp
static int configure_streams(const camera3_device_t *dev,
        camera3_stream_configuration_t *stream_list)
{
    return camdev_to_camera(dev)->configureStreams(stream_list);
}

int Camera::configureStreams(camera3_stream_configuration_t *stream_config)
{
	......
    // Verify the set of streams in aggregate, and perform configuration if valid.
    int res = validateStreamConfiguration(stream_config);
    if (res) {
        ALOGE("%s:%d: Failed to validate stream set", __func__, mId);
    } else {
        // Set up all streams. Since they've been validated,
        // this should only result in fatal (-ENODEV) errors.
        // This occurs after validation to ensure that if there
        // is a non-fatal error, the stream configuration doesn't change states.
        // The key call is here
        res = setupStreams(stream_config);
        if (res) {
            ALOGE("%s:%d: Failed to setup stream set", __func__, mId);
        }
    }
	......
    return res;
}

// hardware/libhardware/modules/camera/3_4/v4l2_camera.cpp
int V4L2Camera::setupStreams(camera3_stream_configuration_t* stream_config) {
  HAL_LOG_ENTER();

  ......
  // This calls V4L2's STREAMOFF
  // Ensure the stream is off.
  int res = device_->StreamOff();
  if (res) {
    HAL_LOGE("Device failed to turn off stream for reconfiguration: %d.", res);
    return -ENODEV;
  }

  StreamFormat stream_format(format, width, height);
  uint32_t max_buffers = 0;
  // Internally this calls V4L2's IoctlLocked(VIDIOC_S_FMT, &new_format) and
  // IoctlLocked(VIDIOC_REQBUFS, &req_buffers)
  res = device_->SetFormat(stream_format, &max_buffers);
  if (res) {
    HAL_LOGE("Failed to set device to correct format for stream: %d.", res);
    return -ENODEV;
  }
  ......
  return 0;
}

As can be seen from the above, it is the endConfigure/configureStreams flow — not createStream itself — that finally reaches the HAL, sets the capture format, and requests the corresponding buffers.
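What SetFormat does at the V4L2 layer can be sketched as follows. The struct fields and ioctl names (VIDIOC_S_FMT, VIDIOC_REQBUFS) are standard V4L2; the helper names, the YUYV pixel format, the buffer count of 4, and the MMAP memory mode are assumptions of this sketch, not necessarily what camera.v4l2 actually chooses.

```cpp
#include <cstdint>
#include <cstring>
#include <linux/videodev2.h>
#include <sys/ioctl.h>

// Fills a v4l2_format for a capture stream (what VIDIOC_S_FMT consumes).
static v4l2_format MakeCaptureFormat(uint32_t width, uint32_t height,
                                     uint32_t pixel_format) {
    v4l2_format fmt;
    std::memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = pixel_format;
    return fmt;
}

// Fills a v4l2_requestbuffers for buffer allocation (what VIDIOC_REQBUFS consumes).
static v4l2_requestbuffers MakeBufferRequest(uint32_t count) {
    v4l2_requestbuffers req;
    std::memset(&req, 0, sizeof(req));
    req.count = count;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;  // memory mode chosen for this sketch
    return req;
}

// Issues VIDIOC_S_FMT followed by VIDIOC_REQBUFS on an open /dev/video fd,
// mirroring the two ioctls that SetFormat performs.
static int ConfigureV4L2Stream(int fd, uint32_t width, uint32_t height) {
    v4l2_format fmt = MakeCaptureFormat(width, height, V4L2_PIX_FMT_YUYV);
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) return -1;
    v4l2_requestbuffers req = MakeBufferRequest(4);
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) return -1;
    return 0;
}
```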

Create CaptureRequest object

1. createCaptureRequest

Now we start analyzing the creation of the CaptureRequest object, which also originates from the CameraDeviceImpl object.

// frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java
// This function returns a CaptureRequest.Builder object.
// The templateType parameter indicates preview, still capture, recording, etc.
@Override
public CaptureRequest.Builder createCaptureRequest(int templateType)
        throws CameraAccessException {
    synchronized(mInterfaceLock) {
        checkIfCameraClosedOrInError();

		// Create the CameraMetadata object, returned through a binder call;
		// evidently it is ultimately created and returned by the HAL process,
		// and that is indeed the case.
        CameraMetadataNative templatedRequest = null;

		// This call goes through CameraDeviceClient, HidlCamera3Device::HidlHalInterface,
		// CameraDeviceSession and camera_device_t, finally reaching the
		// construct_default_request_settings function in
		// hardware/libhardware/modules/camera/3_4/camera.cpp
        templatedRequest = mRemoteDevice.createDefaultRequest(templateType);

        // If app target SDK is older than O, or it's not a still capture template, enableZsl
        // must be false in the default request.
        if (mAppTargetSdkVersion < Build.VERSION_CODES.O ||
                templateType != TEMPLATE_STILL_CAPTURE) {
            overrideEnableZsl(templatedRequest, false);
        }

		// Construct a CaptureRequest.Builder with the CameraMetadataNative
		// object returned by the binder call, and return it. The Builder's
		// constructor creates the CaptureRequest object mRequest
        CaptureRequest.Builder builder = new CaptureRequest.Builder(
                templatedRequest, /*reprocess*/false, CameraCaptureSession.SESSION_ID_NONE,
                getId(), /*physicalCameraIdSet*/ null);

        return builder;
    }
}

// hardware/libhardware/modules/camera/3_4/camera.cpp
static const camera_metadata_t *construct_default_request_settings(
        const camera3_device_t *dev, int type)
{
    return camdev_to_camera(dev)->constructDefaultRequestSettings(type);
}

const camera_metadata_t* Camera::constructDefaultRequestSettings(int type)
{
    ALOGV("%s:%d: type=%d", __func__, mId, type);

    if (!isValidTemplateType(type)) {
        ALOGE("%s:%d: Invalid template request type: %d", __func__, mId, type);
        return NULL;
    }

    if (!mTemplates[type]) {
        // Check if the device has the necessary features
        // for the requested template. If not, don't bother.
        if (!mStaticInfo->TemplateSupported(type)) {
            ALOGW("%s:%d: Camera does not support template type %d",
                  __func__, mId, type);
            return NULL;
        }

        // Initialize this template if it hasn't been initialized yet.
        // The key is here: a CameraMetadata object is created and the type
        // is passed into it
        std::unique_ptr<android::CameraMetadata> new_template =
            std::make_unique<android::CameraMetadata>();
        int res = initTemplate(type, new_template.get());
        if (res || !new_template) {
            ALOGE("%s:%d: Failed to generate template of type: %d",
                  __func__, mId, type);
            return NULL;
        }
        mTemplates[type] = std::move(new_template);
    }

    // The "locking" here only causes non-const methods to fail,
    // which is not a problem since the CameraMetadata being locked
    // is already const. Destructing automatically "unlocks".
    return mTemplates[type]->getAndLock();
}
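The lazy caching pattern above — build each template once, then hand out the cached instance — can be reduced to a minimal sketch. The class and member names here are ours for illustration, and `Metadata` merely stands in for android::CameraMetadata:

```cpp
#include <array>
#include <memory>

// Stand-in for android::CameraMetadata; only the template type is recorded.
struct Metadata {
    int template_type;
};

// Minimal sketch of the memoization in constructDefaultRequestSettings:
// the first Get(type) builds the template, later calls return the cache.
class TemplateCache {
public:
    static constexpr int kTemplateCount = 5;

    // Returns nullptr for invalid types, otherwise the cached template.
    const Metadata* Get(int type) {
        if (type < 0 || type >= kTemplateCount) return nullptr;
        if (!templates_[type]) {
            templates_[type] = std::make_unique<Metadata>(Metadata{type});
            ++builds_;  // counts actual constructions, for demonstration
        }
        return templates_[type].get();
    }

    int builds() const { return builds_; }

private:
    std::array<std::unique_ptr<Metadata>, kTemplateCount> templates_;
    int builds_ = 0;
};
```

Repeated calls with the same template type return the identical cached pointer, which is why getAndLock() in the real code can safely hand the same metadata to every caller.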

2. CaptureRequest.Builder.addTarget

// frameworks/base/core/java/android/hardware/camera2/CaptureRequest.java
// Saves the Surface into mSurfaceSet, which will certainly be used later.
public void addTarget(@NonNull Surface outputTarget) {
    mRequest.mSurfaceSet.add(outputTarget);
}

3. CaptureRequest.Builder.build

// Constructs a new object from the previously created CaptureRequest object
@NonNull
public CaptureRequest build() {
    return new CaptureRequest(mRequest);
}

setRepeatingRequest process analysis

Performing this step starts the preview or the still capture; the caller is the previously created CameraCaptureSessionImpl object.

// frameworks/base/core/java/android/hardware/camera2/impl/CameraCaptureSessionImpl.java
@Override
public int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
        Handler handler) throws CameraAccessException {
    checkRepeatingRequest(request);

    synchronized (mDeviceImpl.mInterfaceLock) {
        checkNotClosed();

        handler = checkHandler(handler, callback);

        if (DEBUG) {
            Log.v(TAG, mIdString + "setRepeatingRequest - request " + request + ", callback " +
                    callback + " handler" + " " + handler);
        }
		// mDeviceImpl is CameraDeviceImpl
        return addPendingSequence(mDeviceImpl.setRepeatingRequest(request,
                createCaptureCallbackProxy(handler, callback), mDeviceExecutor));
    }
}

// frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java
public int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
        Executor executor) throws CameraAccessException {
    // Put the request into a list
    List<CaptureRequest> requestList = new ArrayList<CaptureRequest>();
    requestList.add(request);
    return submitCaptureRequest(requestList, callback, executor, /*streaming*/true);
}

// Here repeating == true means preview, false means still capture
private int submitCaptureRequest(List<CaptureRequest> requestList, CaptureCallback callback,
        Executor executor, boolean repeating) throws CameraAccessException {

    // Need a valid executor, or current thread needs to have a looper, if
    // callback is valid
    executor = checkExecutor(executor, callback);

    synchronized(mInterfaceLock) {
        checkIfCameraClosedOrInError();

        // Make sure that there all requests have at least 1 surface; all surfaces are non-null;
        // Check whether any surface is null
        for (CaptureRequest request : requestList) {
            if (request.getTargets().isEmpty()) {
                throw new IllegalArgumentException(
                        "Each request must have at least one Surface target");
            }

            for (Surface surface : request.getTargets()) {
                if (surface == null) {
                    throw new IllegalArgumentException("Null Surface targets are not allowed");
                }
            }
        }
		// repeating is true, so stop the previous repeating request first
        if (repeating) {
            stopRepeating();
        }

        SubmitInfo requestInfo;

        CaptureRequest[] requestArray = requestList.toArray(new CaptureRequest[requestList.size()]);
        // Convert Surface to streamIdx and surfaceIdx
        for (CaptureRequest request : requestArray) {
        	// Convert each surface to its corresponding stream id
            request.convertSurfaceToStreamId(mConfiguredOutputs);
        }

		// This is the key call. The detailed code is not pasted here; the
		// overall call chain is as follows:
		// mRemoteDevice.submitRequestList->
		// mDevice->setStreamingRequestList->
		// HidlCamera3Device::HidlHalInterface::processBatchCaptureRequests->
	    // CameraDeviceSession::processCaptureRequest_3_4->
	    // mDevice->ops->process_capture_request->
	    // Camera::processCaptureRequest->
	    // V4L2Camera::enqueueRequest->
	    // V4L2Wrapper::EnqueueRequest-> (IoctlLocked(VIDIOC_QBUF, &device_buffer))
	    // V4L2Wrapper::DequeueRequest->(IoctlLocked(VIDIOC_DQBUF, &buffer))
        requestInfo = mRemoteDevice.submitRequestList(requestArray, repeating);
        if (DEBUG) {
            Log.v(TAG, "last frame number " + requestInfo.getLastFrameNumber());
        }

        ......

        return requestInfo.getRequestId();
    }
}
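At the bottom of that call chain, each preview frame is one QBUF/DQBUF cycle against the /dev/video node. The sketch below shows the shape of that cycle; the helper names are ours, and the MMAP memory mode is an assumption of this sketch rather than a claim about what camera.v4l2 configures.

```cpp
#include <cstdint>
#include <cstring>
#include <linux/videodev2.h>
#include <sys/ioctl.h>

// Fills a v4l2_buffer describing buffer slot `index` for enqueueing.
static v4l2_buffer MakeQueueBuffer(uint32_t index) {
    v4l2_buffer buf;
    std::memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;  // memory mode chosen for this sketch
    buf.index = index;
    return buf;
}

// One iteration of the preview loop on an open, streaming /dev/video fd:
// hand an empty buffer to the driver, then block until it comes back filled.
// Returns the number of bytes of image data, or -1 on error.
static int CycleOneFrame(int fd, uint32_t index) {
    v4l2_buffer buf = MakeQueueBuffer(index);
    if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) return -1;   // enqueue empty buffer
    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) return -1;  // dequeue filled frame
    return static_cast<int>(buf.bytesused);
}
```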

OK — at this point the basic calling flow of the preview has been sorted out. I am aware that many details have not yet been made clear. There are several reasons: first, there is so much content that it cannot all be explained in one article; and second, my own knowledge is limited and I have not yet fully mastered the whole process. I will therefore publish more chapters later to keep refining the preview flow, and do my best to explain every important detail clearly.

End of this chapter.

Origin blog.csdn.net/weixin_41678668/article/details/132655732