【Camera2】Android Camera2 Summary

1. Summary

This article is a review of Camera2; it mainly covers the following parts:

  • Part1: Camera2 reference materials
  • Part2: Camera2 development history
  • Part3: Camera2 framework and usage process

This article can be read together with the Camera1 review article. For the completeness of the description, this article overlaps with the Camera1 review article.

Part 2 introduces in detail the differences between Camera1 and Camera2, as well as the development history of Camera2.

Part 3 introduces the Camera2 usage process in detail from both the framework dimension and the code dimension. It covers only the usage steps; for more thorough code development and explanation, refer to the Camera2 source code analysis and the Camera2 open-source project analysis articles.

For related Camera2 series articles, refer to: Android Camera series article progress table.

2. Reference materials

3. Development history

First of all, the following two concepts must be clarified:

  • HAL
  • API

By comparing the different HAL and API versions, we can identify the specific differences between Camera1 and Camera2.

3.1 HAL: Hardware Abstraction Layer

Here is the description in the Camera1 review article :

The HAL sits between the camera driver and the higher-level Android framework, and defines the interfaces you must implement so that your app can properly operate the camera hardware.

The HAL defines a standard interface for hardware vendors to implement, allowing Android to ignore lower-level driver implementations. HAL implementations are usually built into shared library modules (.so).

As can be seen, the HAL is mainly provided and updated by the camera vendor; its version history only needs to be understood at a high level and does not require deep study. A brief summary follows; for a more detailed understanding, please refer to:

| HAL version | Description |
| --- | --- |
| 1.0 | Initial Android camera HAL (Android 4.0) [camera.h]; supports the android.hardware.Camera API. |
| 2.0 | Initial release of the expanded-capability HAL (Android 4.2) [camera2.h]; sufficient to implement the existing android.hardware.Camera API. |
| 3.0 | First revision of the expanded-capability HAL; a major version change that redesigns the input request and stream queue interfaces, among other things. |
| 3.1 | Minor revision of the expanded-capability HAL; configure_streams passes consumer usage flags to the HAL; flush call to discard all in-flight requests/buffers as fast as possible. |
| 3.2 | Minor revision of the expanded-capability HAL; deprecates get_metadata_vendor_tag_ops and register_stream_buffers; reworks the bidirectional and input stream specifications; changes the input buffer return path. |
| 3.3 | Minor revision of the expanded-capability HAL; OPAQUE and YUV reprocessing API updates; basic support for depth output buffers; adds some fields. |
| 3.4 | Minor additions to supported metadata and changes to data_space support. |
| 3.5 | Android 10.0; updates to ICameraDevice, ICameraDeviceSession, and ICameraDeviceCallback. |
| Android 8.0 | Treble is introduced; vendor camera HAL implementations must be binderized. |
| Android 9.0 | HAL 3.3 adds some keys and metadata tags; HAL 3.4 updates ICameraDeviceSession and ICameraDeviceCallback. |
| Android 10.0 | HAL 3.4 adds image formats, metadata tags, new functions, stream configurations, and more. |

3.2 Difference between Camera1 and Camera2

Camera1 and Camera2 correspond to Camera API1 and Camera API2, respectively.

3.2.1 Camera API1

Camera API1 was deprecated in Android 5.0, as new platform development focuses on Camera API2; Camera API1 will be gradually phased out. However, the phase-out period will be long, and new Android versions will continue to support Camera API1 apps. This is why the Camera1 framework can still be used as an application support framework in later Android SDKs.

3.2.2 Camera API2

  • The Camera API2 framework provides apps with closer-to-low-level camera controls, including efficient zero-copy burst shooting/streaming and per-frame controls for exposure, gain, white balance gain, color conversion, denoising, sharpening, and more.
  • Camera API2 is provided on Android 5.0 and higher; that is, Android 5.0 and higher support Camera2, but devices running Android 5.0 and higher may not support all Camera API2 features.
  • Camera2 is divided into the following support levels:

| Camera2 support level | Description |
| --- | --- |
| LEGACY | Roughly the same capabilities as API1; the lower layers translate Camera API2 calls into Camera API1 calls. |
| LIMITED | Supports some Camera API2 capabilities; requires camera HAL 3.2 or higher. |
| FULL | Supports all major API2 capabilities; requires camera HAL 3.2 or higher. |
| LEVEL_3 | Supports YUV reprocessing and RAW image capture, as well as additional output stream configurations. |
| EXTERNAL | Similar to LIMITED devices; this level is used for external cameras. |

Considering the cost of compatibility, in practice Camera2 features are not adopted directly on Android 5.0. Instead, Camera2 is supported starting from an Android SDK version corresponding to a higher HAL version (3.2 or above). From which Android version should Camera2 be adopted? This will be revealed in 【3.3】.
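To see which of the levels above a particular device actually reports, the supported hardware level can be read from its CameraCharacteristics. This is a minimal sketch; the `cameraManager` and `cameraId` values are assumed to come from the enumeration code shown later in Part 4:

```kotlin
// Query the Camera2 support level reported by a device
val characteristics = cameraManager.getCameraCharacteristics(cameraId)
val level = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)
val levelName = when (level) {
    CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY -> "LEGACY"
    CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED -> "LIMITED"
    CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_FULL -> "FULL"
    CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_3 -> "LEVEL_3"
    CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_EXTERNAL -> "EXTERNAL"
    else -> "UNKNOWN"
}
```

Note that the EXTERNAL constant is only available on newer SDK levels, so a fall-through `else` branch is kept for forward compatibility.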

3.2.3 Camera API version history

Here is just a brief summary; for details, please refer to the camera API version history.

| API version | Description |
| --- | --- |
| 1.0 | Implements HAL 1.0, i.e. Camera1. |
| 2.0 | Implements HAL 2.0 or higher, i.e. Camera2. |
| 2.1 | Adds support for asynchronous callbacks from the camera HAL module to the framework. |
| 2.2 | Adds module vendor tag support. |
| 2.3 | Adds support for opening legacy camera HAL devices. |
| 2.4 | API changes: flashlight mode support; external camera (such as USB hot-pluggable camera) support; camera arbitration hints; module initialization method. |

3.3 HAL1 and HAL3

3.3.1 HAL1

HAL1 corresponds to Camera1 and is now deprecated. The HAL1 architecture is shown below:
[Figure: HAL1 architecture diagram]
Here, the official documentation clearly states:

Note: Because Camera HAL1 is deprecated, Camera HAL3 is recommended on devices running Android 9 or higher.

So in general, it is better to support Camera2 on Android 9.0 and above. And among the Camera2 support levels, the FULL and LEVEL_3 levels require HAL 3.2 or above, as noted earlier.

3.3.2 HAL3

The functional differences between Camera1 and Camera2 were explained in 【3.2】. The HAL3 architecture diagram is pasted here; for detailed theoretical background, please refer to the article: [Android Camera] Android Camera Architecture Design Detailed Explanation.

[Figure: HAL3 architecture diagram]

4. Framework and usage process

This part is divided into two dimensions: the framework dimension and the code dimension.

4.1 Framework Dimension

Including the following parts

  • Camera2 camera model
  • Camera HAL3 core operation model
  • Summary of HAL operations
  • Camera sensor
  • Android Camera API steps

4.1.1 Camera2 camera model

[Figure: Camera2 camera model]
The figure above contains the following three layers:

4.1.1.1 Camera-Using Application

| Module | Description |
| --- | --- |
| CameraManager.AvailabilityCallback | Callback through which the application layer monitors the status of camera devices. |
| CameraCharacteristics | Camera device information and related settings class. |
| Output Stream Destinations | Output stream configuration: preview, still capture, recording, and other scenarios can be configured. |
| Callbacks | Callback methods such as CameraCaptureSession.CaptureCallback. |

4.1.1.2 Camera2 API

| Module | Description |
| --- | --- |
| CameraManager | Class through which the application layer connects to devices and obtains camera information. |
| CameraDevice | Represents an opened camera device; lets the application layer create capture sessions and frame capture requests. |
| CameraCaptureSession | Translates specific frame capture requests into the configured output streams. |
| Configured Outputs | The configured output stream queue. |

4.1.1.3 Camera Device Hardware

Hardware level: includes lens control, the camera sensor, flash settings, post-processing, and control operations.

4.1.2 Camera HAL3 core operation model

The core operation in the camera model is converting frame capture requests into the corresponding output streams, as shown below:
[Figure: Camera HAL3 core operation model]
The API models the camera subsystem as a pipeline that converts incoming frame capture requests (CaptureRequest) into frames. A request contains all configuration information for the capture and processing of a frame, including resolution and pixel format; manual sensor, lens, and flash control; 3A operating modes; RAW-to-YUV processing controls; statistics generation; and more.

  1. Construct a CaptureRequest
  2. Single capture
  3. The request is dispatched to the camera device
  4. Output 1: configured output surfaces (preview, photo, etc.)
  5. Output 2: onCaptureCompleted -> CaptureResult

The core processing flow is shown in the figure and the 5 steps above. This is only a single-request operation; Camera2 also provides a repeating-request flow. Read on.

4.1.3 HAL operation summary

[Figure: HAL operation summary]
As shown above, the flow includes the following stages:

  1. Input and construction of the CaptureRequest
  2. The camera HAL performs configuration, frame capture, and post-processing
  3. Output into the configured output streams

In the figure, CaptureRequests fall into two kinds:

  • Single capture: output directly
  • Repeating capture: loops back from stage 4 to stage 6

A few notes:

  1. Asynchronous capture requests come from the framework.
  2. The HAL device must process requests in order. For each request, it generates output result metadata and one or more output image buffers.
  3. Requests, results, and the streams referenced by subsequent requests follow first-in-first-out order.
  4. The timestamps of all outputs for a given request must be identical, so that the framework can match them together as needed.
  5. All capture configuration and state (excluding the 3A routines) is contained in the requests and results.

4.1.4 Camera Sensor

This subsection introduces the camera hardware level, i.e. the camera sensor.
[Figure: image signal processor pipeline]
As shown above, the image signal processor (ISP, also referred to here as the camera sensor pipeline) contains the following modules:

  1. 3A algorithms;
  2. Handling of output stream resolution, size, and format;
  3. Image processing: Hot Pixel Correction -> Demosaic -> Noise Reduction -> Shading Correction -> Geometric Correction -> Color Correction -> Tone Curve Adjustment -> Edge Enhancement.

For an overview of the whole ISP processing flow, see the following article:

【Android Camera】1. Camera Theory and Basic Principles

It can be summarized in the following two figures:
[Figure: ISP pipeline overview 1]

[Figure: ISP pipeline overview 2]

4.1.5 Android Camera API Steps

  1. Listen for and enumerate camera devices.
  2. Open a device and connect the listeners.
  3. Configure outputs for the target use case (such as still capture, recording, etc.).
  4. Create a request for the target use case.
  5. Capture/repeat requests and bursts.
  6. Receive result metadata and image data.
  7. When switching use cases, return to step 3.

4.2 Code Dimension

This section walks through the flow described in 【4.1.5】.

4.2.1 Listen for and enumerate camera devices

4.2.1.1 Listening for camera devices

Use the following method of CameraManager:

public void registerAvailabilityCallback(@NonNull AvailabilityCallback callback,
        @Nullable Handler handler) {
    CameraManagerGlobal.get().registerAvailabilityCallback(callback,
            CameraDeviceImpl.checkAndWrapHandler(handler));
}

AvailabilityCallback contains the following 3 methods:

public void onCameraAvailable(@NonNull String cameraId) {
}

public void onCameraUnavailable(@NonNull String cameraId) {
}

public void onCameraAccessPrioritiesChanged() {
}

4.2.1.2 Enumerating camera devices

Modern devices often have 2 or more cameras; they can be enumerated as follows:

val cameraIdList = cameraManager.cameraIdList // may be empty
val characteristics = cameraManager.getCameraCharacteristics(cameraId)
val cameraLensFacing = characteristics.get(CameraCharacteristics.LENS_FACING)
val cameraCapabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
val cameraCompatible = cameraCapabilities?.contains(
        CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_BACKWARD_COMPATIBLE) ?: false

The following method returns the first cameraId facing a given direction (front or back):

fun getFirstCameraIdFacing(cameraManager: CameraManager,
                           facing: Int = CameraMetadata.LENS_FACING_BACK): String? {
    // Get list of all compatible cameras
    val cameraIds = cameraManager.cameraIdList.filter {
        val characteristics = cameraManager.getCameraCharacteristics(it)
        val capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
        capabilities?.contains(
                CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_BACKWARD_COMPATIBLE) ?: false
    }

    // Iterate over the list of cameras and return the first one matching desired
    // lens-facing configuration
    cameraIds.forEach {
        val characteristics = cameraManager.getCameraCharacteristics(it)
        if (characteristics.get(CameraCharacteristics.LENS_FACING) == facing) {
            return it
        }
    }
    // If no camera matched desired orientation, return the first one from the list
    return cameraIds.firstOrNull()
}

Other related methods:

fun filterCompatibleCameras(cameraIds: Array<String>,
                            cameraManager: CameraManager): List<String> {
    return cameraIds.filter {
        val characteristics = cameraManager.getCameraCharacteristics(it)
        characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)?.contains(
                CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_BACKWARD_COMPATIBLE) ?: false
    }
}

fun filterCameraIdsFacing(cameraIds: List<String>, cameraManager: CameraManager,
                          facing: Int): List<String> {
    return cameraIds.filter {
        val characteristics = cameraManager.getCameraCharacteristics(it)
        characteristics.get(CameraCharacteristics.LENS_FACING) == facing
    }
}

fun getNextCameraId(cameraManager: CameraManager, currCameraId: String? = null): String? {
    // Get all front, back and external cameras in 3 separate lists
    val cameraIds = filterCompatibleCameras(cameraManager.cameraIdList, cameraManager)
    val backCameras = filterCameraIdsFacing(
            cameraIds, cameraManager, CameraMetadata.LENS_FACING_BACK)
    val frontCameras = filterCameraIdsFacing(
            cameraIds, cameraManager, CameraMetadata.LENS_FACING_FRONT)
    val externalCameras = filterCameraIdsFacing(
            cameraIds, cameraManager, CameraMetadata.LENS_FACING_EXTERNAL)

    // The recommended order of iteration is: all external, first back, first front
    val allCameras = (externalCameras + listOf(
            backCameras.firstOrNull(), frontCameras.firstOrNull())).filterNotNull()

    // Get the index of the currently selected camera in the list
    val cameraIndex = allCameras.indexOf(currCameraId)

    // The selected camera may not be on the list, for example it could be an
    // external camera that has been removed by the user
    return if (cameraIndex == -1) {
        // Return the first camera from the list
        allCameras.getOrNull(0)
    } else {
        // Return the next camera from the list, wrap around if necessary
        allCameras.getOrNull((cameraIndex + 1) % allCameras.size)
    }
}

4.2.2 Open the device and connect the listener

Use the following method:

public void openCamera(@NonNull String cameraId,
        @NonNull final CameraDevice.StateCallback callback, @Nullable Handler handler)
        throws CameraAccessException {
    openCameraForUid(cameraId, callback, CameraDeviceImpl.checkAndWrapHandler(handler),
            USE_CALLING_UID);
}

Parameter description:

  • cameraId: obtained as described in 【4.2.1】
  • CameraDevice.StateCallback, as follows:
public static abstract class StateCallback {
    public abstract void onOpened(@NonNull CameraDevice camera); // Must implement

    public void onClosed(@NonNull CameraDevice camera) {
        // Default empty implementation
    }

    public abstract void onDisconnected(@NonNull CameraDevice camera); // Must implement

    public abstract void onError(@NonNull CameraDevice camera,
            @ErrorCode int error); // Must implement
}
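A minimal usage sketch, assuming a valid `cameraId` from the enumeration code above, that the CAMERA runtime permission has already been granted, and a hypothetical `TAG` constant for logging:

```kotlin
// Hypothetical helper: opens the camera and hands the CameraDevice to `onOpened`
fun openCameraDevice(cameraManager: CameraManager, cameraId: String,
                     onOpened: (CameraDevice) -> Unit) {
    cameraManager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        override fun onOpened(camera: CameraDevice) = onOpened(camera)

        // The device was disconnected (e.g. taken by a higher-priority client)
        override fun onDisconnected(camera: CameraDevice) = camera.close()

        override fun onError(camera: CameraDevice, error: Int) {
            camera.close()
            Log.e(TAG, "Camera $cameraId error: $error")
        }
    }, null)  // null handler: callbacks run on the calling thread's Looper
}
```

Closing the device in `onDisconnected` and `onError` is important; a leaked CameraDevice keeps the camera unavailable to other apps.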

4.2.3 Configure outputs for the target use case (such as still capture, recording, etc.) and create requests for it

// Retrieve the target surfaces, which could be coming from a number of places:
// 1. SurfaceView, if you want to display the image directly to the user
// 2. ImageReader, if you want to read each frame or perform frame-by-frame analysis
// 3. OpenGL Texture or TextureView, although discouraged for maintainability reasons
// 4. RenderScript.Allocation, if you want to do parallel processing
val surfaceView = findViewById<SurfaceView>(...)
val imageReader = ImageReader.newInstance(...)

// Remember to call this only *after* SurfaceHolder.Callback.surfaceCreated()
val previewSurface = surfaceView.holder.surface
val imReaderSurface = imageReader.surface
val targets = listOf(previewSurface, imReaderSurface)

// Create a capture session using the predefined targets; this also involves defining the
// session state callback to be notified of when the session is ready
cameraDevice.createCaptureSession(targets, object: CameraCaptureSession.StateCallback() {
  override fun onConfigured(session: CameraCaptureSession) {
    // Do something with `session`
  }
  // Omitting for brevity...
  override fun onConfigureFailed(session: CameraCaptureSession) = Unit
}, null)  // null can be replaced with a Handler, falls back to current thread's Looper

4.2.4 Capture/repeat requests and bursts; switching between single and repeating captures

val session: CameraCaptureSession = ...  // from CameraCaptureSession.StateCallback

// Create the repeating request and dispatch it
val repeatingRequest = session.device.createCaptureRequest(
        CameraDevice.TEMPLATE_PREVIEW)
repeatingRequest.addTarget(previewSurface)
session.setRepeatingRequest(repeatingRequest.build(), null, null)

// Some time later...

// Create the single request and dispatch it
// NOTE: This may disrupt the ongoing repeating request momentarily
val singleRequest = session.device.createCaptureRequest(
        CameraDevice.TEMPLATE_STILL_CAPTURE)
singleRequest.addTarget(imReaderSurface)
session.capture(singleRequest.build(), null, null)

The difference between single and repeating captures is illustrated below:
[Figure: single capture vs. repeating capture]

4.2.5 Receive result metadata and image data.

    public abstract int setRepeatingRequest(@NonNull CaptureRequest request,
            @Nullable CaptureCallback listener, @Nullable Handler handler)
            throws CameraAccessException;
  • CaptureRequest: built as described in 【4.2.4】
  • CaptureCallback listener, as follows:
public static abstract class CaptureCallback {
    public static final int NO_FRAMES_CAPTURED = -1;

    public void onCaptureStarted(@NonNull CameraCaptureSession session,
            @NonNull CaptureRequest request, long timestamp, long frameNumber) {
        // default empty implementation
    }

    public void onCapturePartial(CameraCaptureSession session,
            CaptureRequest request, CaptureResult result) {
        // default empty implementation
    }

    public void onCaptureProgressed(@NonNull CameraCaptureSession session,
            @NonNull CaptureRequest request, @NonNull CaptureResult partialResult) {
        // default empty implementation
    }

    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
            @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
        // default empty implementation
    }

    public void onCaptureFailed(@NonNull CameraCaptureSession session,
            @NonNull CaptureRequest request, @NonNull CaptureFailure failure) {
        // default empty implementation
    }

    public void onCaptureSequenceCompleted(@NonNull CameraCaptureSession session,
            int sequenceId, long frameNumber) {
        // default empty implementation
    }

    public void onCaptureSequenceAborted(@NonNull CameraCaptureSession session,
            int sequenceId) {
        // default empty implementation
    }

    public void onCaptureBufferLost(@NonNull CameraCaptureSession session,
            @NonNull CaptureRequest request, @NonNull Surface target, long frameNumber) {
        // default empty implementation
    }
}
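As a hedged sketch of receiving per-frame result metadata (assuming the `session` and `repeatingRequest` objects from 【4.2.4】 and a hypothetical `TAG` constant; image data itself arrives separately through the configured output surfaces, e.g. an ImageReader):

```kotlin
// Attach a CaptureCallback to the repeating request to receive per-frame results
session.setRepeatingRequest(repeatingRequest.build(),
        object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(session: CameraCaptureSession,
                                    request: CaptureRequest,
                                    result: TotalCaptureResult) {
        // Read metadata for this frame, e.g. sensor timestamp and exposure time
        val timestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
        val exposure = result.get(CaptureResult.SENSOR_EXPOSURE_TIME)
        Log.d(TAG, "Frame at $timestamp ns, exposure $exposure ns")
    }
}, null)
```

The `SENSOR_TIMESTAMP` read here is the same timestamp the HAL must keep identical across all outputs of a request (note 4 in 【4.1.3】), which is what lets metadata be matched to image buffers.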

END

If you continue to read related articles, please refer to: Android Camera series article progress table. You are also welcome to bookmark the Android Camera series articles to learn and make progress together.

If you have any questions or errata, please comment or contact me via the following email
[email protected]

Origin: blog.csdn.net/Scott_S/article/details/122143560