Using Camera2 to Build a Custom Camera on Android

In the previous blog I introduced how to use the old Camera API for single shots and burst shooting, and you should have a good grasp of it by now. But if you run that example, or write a demo based on the explanation, you will find that the photos are not very sharp. One big reason is that the photo resolution in that demo tops out at 1920*1080, which hardly looks crisp next to the multi-thousand-pixel resolutions that are common today. That is why Camera2 was introduced in Android 5.0, and the display control that works with Camera2 also changed to TextureView. Below we will explain these two topics one by one, and at the end give an example of a custom camera built with Camera2.

1. Common Methods of TextureView

  • lockCanvas: Lock and get the canvas. 
  • unlockCanvasAndPost: Unlock and refresh the canvas.
  • setSurfaceTextureListener: Set the listener for the surface texture. This method is the counterpart of SurfaceHolder's addCallback method and is used to monitor state-change events of the surface texture. The parameter is a SurfaceTextureListener object, four of whose methods need to be overridden (see the sketch after this list):

      onSurfaceTextureAvailable: Triggered when the surface texture is available; operations such as opening the camera can be performed here.
      onSurfaceTextureSizeChanged: Triggered when the surface texture size changes.
      onSurfaceTextureDestroyed: Triggered when the surface texture is destroyed.
      onSurfaceTextureUpdated: Triggered when the surface texture is updated.
  • isAvailable: Determine whether the surface texture is available.
  • getSurfaceTexture: Get the surface texture.
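
To make the wiring of this listener concrete, here is a minimal sketch of attaching a SurfaceTextureListener to a TextureView. The class PreviewHelper and its openCamera/closeCamera placeholders are illustrative assumptions, not part of the Android API:

import android.graphics.SurfaceTexture;
import android.view.TextureView;

public class PreviewHelper {
    public void attach(TextureView textureView) {
        if (textureView.isAvailable()) {
            // The surface texture already exists, e.g. after returning to the foreground
            openCamera(textureView.getSurfaceTexture());
        } else {
            textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
                @Override
                public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
                    openCamera(surface); // safe to start the camera once the texture is available
                }

                @Override
                public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {}

                @Override
                public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
                    closeCamera(); // release the camera when the texture is destroyed
                    return true;
                }

                @Override
                public void onSurfaceTextureUpdated(SurfaceTexture surface) {}
            });
        }
    }

    private void openCamera(SurfaceTexture texture) { /* placeholder: open the camera here */ }
    private void closeCamera() { /* placeholder: release the camera here */ }
}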

2. Making the SurfaceView Background Transparent

// The following two lines make the background transparent, because the default SurfaceView background is black
setZOrderOnTop(true);
mHolder.setFormat(PixelFormat.TRANSLUCENT);

3. New Features of Camera2

  • Supports full HD continuous shooting at 30 frames per second.
  • Supports using different settings between each frame.
  • Supports image output in native format.
  • Supports zero-shutter-lag capture and shooting at movie frame rates.
  • Supports manual control of other camera parameters, such as the noise-reduction level.

4. The New Architecture of Camera2

Camera2 has undergone a substantial transformation in its architecture. The original Camera class is split into multiple management classes, mainly including the following parts:

  • Camera Manager CameraManager
  • Camera Device CameraDevice
  • CameraCaptureSession
  • Image Reader ImageReader

5. Camera Manager CameraManager

<1> The role of the camera manager

The camera manager is used to obtain the list of available cameras, open a camera, and so on. The manager object is obtained from the system service CAMERA_SERVICE.

<2> Common methods of camera manager

  • getCameraIdList: Get the list of camera ids. Usually two entries are returned, one for the rear camera and one for the front camera (a sketch combining this call with getCameraCharacteristics follows this list).
  • getCameraCharacteristics: Get the parameter information of the camera. Including the support level of the camera, the size of the photo, etc.
  • openCamera: Open the specified camera. The first parameter is the id of the camera, and the second parameter is the device state listener, which needs to implement the onOpened method of the interface CameraDevice.StateCallback (inside that method the CameraDevice object's createCaptureRequest method is then called).
  • setTorchMode: Turn the flash on or off without opening the camera. True means to turn on the flash, false means to turn off the flash.
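
As a minimal sketch of how getCameraIdList and getCameraCharacteristics work together (the class name CameraLister is only for illustration), the following code enumerates the available cameras and logs whether each one faces the front or the back:

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.util.Log;

public class CameraLister {
    public static void listCameras(Context context) {
        // Obtain the camera manager from the system service
        CameraManager cm = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        try {
            for (String id : cm.getCameraIdList()) {
                // Read the characteristics of each camera and check which way the lens faces
                CameraCharacteristics cc = cm.getCameraCharacteristics(id);
                Integer facing = cc.get(CameraCharacteristics.LENS_FACING);
                String desc = (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT)
                        ? "front camera" : "back camera";
                Log.d("CameraLister", "camera id=" + id + " -> " + desc);
            }
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
}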

<3> Check whether the current phone supports Camera2

// Obtain the camera manager from the system service
CameraManager cm = (CameraManager) mContext.getSystemService(Context.CAMERA_SERVICE);
// Get the characteristics of the specified camera (cameraid is the camera's id string)
CameraCharacteristics cc = cm.getCameraCharacteristics(cameraid);
// Check the support level of the camera hardware
// CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL means full support
// CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED means limited support
// CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY means legacy support only
int level = cc.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
if (level == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL){
    ToastUtil.toastWord(mContext,"Fully supported");
}else if (level == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED){
    ToastUtil.toastWord(mContext,"Limited support");
}else if (level == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY){
    ToastUtil.toastWord(mContext,"Legacy level, not recommended");
}

6. Camera Device CameraDevice

<1> The role of the camera device

The camera device is used to create capture requests, add preview targets, create capture sessions, and so on.

<2> Common methods of the camera device

  • createCaptureRequest: Create a capture request builder. The parameter is a template type such as TEMPLATE_PREVIEW, and the return value is a CaptureRequest.Builder used to configure the preview or photo request.
  • createCaptureSession: Create a capture session. The parameters are the list of output surfaces, the session state listener and a handler; the listener needs to implement the onConfigured method of the interface CameraCaptureSession.StateCallback (that method then calls the CameraCaptureSession object's setRepeatingRequest method to output the preview image to the screen). A sketch of both calls follows this list.
  • close: Close the camera device.
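
Below is a minimal sketch of these two calls inside CameraDevice.StateCallback.onOpened: build a preview request with createCaptureRequest, then open a session with createCaptureSession whose onConfigured starts the repeating preview. The class PreviewSessionStarter and its constructor parameters are illustrative assumptions rather than part of this article's example:

import java.util.Arrays;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.media.ImageReader;
import android.os.Handler;
import android.view.Surface;

public class PreviewSessionStarter extends CameraDevice.StateCallback {
    private final Surface previewSurface; // Surface built from the TextureView's SurfaceTexture
    private final ImageReader imageReader; // receives the still JPEGs
    private final Handler handler;

    public PreviewSessionStarter(Surface previewSurface, ImageReader imageReader, Handler handler) {
        this.previewSurface = previewSurface;
        this.imageReader = imageReader;
        this.handler = handler;
    }

    @Override
    public void onOpened(CameraDevice camera) {
        try {
            final CaptureRequest.Builder builder =
                    camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            builder.addTarget(previewSurface); // preview frames go to the screen
            camera.createCaptureSession(
                    Arrays.asList(previewSurface, imageReader.getSurface()),
                    new CameraCaptureSession.StateCallback() {
                        @Override
                        public void onConfigured(CameraCaptureSession session) {
                            try {
                                // stream the preview continuously once the session is configured
                                session.setRepeatingRequest(builder.build(), null, handler);
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                        }

                        @Override
                        public void onConfigureFailed(CameraCaptureSession session) {}
                    }, handler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    @Override public void onDisconnected(CameraDevice camera) { camera.close(); }
    @Override public void onError(CameraDevice camera, int error) { camera.close(); }
}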

7. Camera Capture Session CameraCaptureSession

<1> The role of the camera capture session

The camera capture session is used to perform a single capture (one photo at a time), a repeating capture (continuous burst of photos), and so on.

<2> Common methods of the camera capture session

  • getDevice: Get the camera device object of the session.
  • capture: Submit a single capture request and output to the targets set on the request. When the target is the preview Surface created from the TextureView, the frame is shown on the screen; when the target is the ImageReader's surface, the photo is handed to the reader to be saved (see the sketch after this list).
  • setRepeatingRequest: Submit a repeating (continuous) capture request and output to the targets set on the request, following the same rule: frames targeted at the preview Surface appear on the screen, and frames targeted at the ImageReader's surface are saved.
  • stopRepeating: Stop continuous shooting.
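
As a minimal sketch of capture with an ImageReader target (the helper class StillCaptureHelper is hypothetical), the following takes one still photo on an already-configured session, so the JPEG is delivered to the image reader instead of the screen:

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.media.ImageReader;
import android.os.Handler;

public final class StillCaptureHelper {
    // Take one still photo: the request targets the ImageReader's Surface,
    // so the resulting JPEG arrives in OnImageAvailableListener rather than on screen.
    public static void captureOnce(CameraCaptureSession session, ImageReader imageReader,
                                   Handler handler) {
        try {
            CaptureRequest.Builder builder = session.getDevice()
                    .createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
            builder.addTarget(imageReader.getSurface());
            session.capture(builder.build(), null, handler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
}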

8. Image Reader ImageReader

<1> The role of the image reader

The image reader is used to obtain and save the captured image data. As soon as image data is generated, the onImageAvailable method is triggered.

<2> Common methods of the image reader

  • getSurface: Obtain the surface object from which the image is read.
  • setOnImageAvailableListener: Set the image-available listener. The listener needs to implement the onImageAvailable method of the interface ImageReader.OnImageAvailableListener (see the sketch after this list).
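
Here is a minimal sketch of these two methods together: create a JPEG ImageReader and register an OnImageAvailableListener that writes each captured frame to a file. The factory class JpegReaderFactory and its outputPath parameter are assumptions for illustration; the full example in the next section does the same job with its own ImageSaver task:

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;

public final class JpegReaderFactory {
    // Create a JPEG image reader and save each captured frame to outputPath.
    public static ImageReader create(int width, int height, final String outputPath, Handler handler) {
        ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 2);
        reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
            @Override
            public void onImageAvailable(ImageReader r) {
                Image image = r.acquireNextImage();
                if (image == null) {
                    return;
                }
                // JPEG data sits in plane 0 of the Image
                ByteBuffer buffer = image.getPlanes()[0].getBuffer();
                byte[] bytes = new byte[buffer.remaining()];
                buffer.get(bytes);
                try (FileOutputStream out = new FileOutputStream(outputPath)) {
                    out.write(bytes);
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    image.close(); // always release the Image so the reader can reuse its buffer
                }
            }
        }, handler);
        return reader;
    }
}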

9. Camera2 usage example

Camera2View.java

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import android.Manifest;
import android.content.Context;
import android.content.pm.PackageManager;
import android.graphics.ImageFormat;
import android.graphics.SurfaceTexture;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.CameraMetadata;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.media.Image;
import android.media.ImageReader;
import android.media.ImageReader.OnImageAvailableListener;
import android.os.Build;
import android.os.Handler;
import android.os.HandlerThread;
import android.util.AttributeSet;
import android.util.Log;
import android.util.Size;
import android.view.Surface;
import android.view.TextureView;
import android.widget.Toast;
import androidx.annotation.RequiresApi;
import androidx.core.app.ActivityCompat;
import com.hao.baselib.utils.PathGetUtil;
import com.hao.baselib.utils.ToastUtil;

@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
public class Camera2View extends TextureView {

    private static final String TAG = "Camera2View";
    private Context mContext; // context object
    private Handler mHandler;
    private HandlerThread mThreadHandler;
    private CaptureRequest.Builder mPreviewBuilder; // capture request builder
    private CameraCaptureSession mCameraSession; // camera capture session
    private CameraDevice mCameraDevice; // camera device
    private ImageReader mImageReader; // image reader
    private Size mPreViewSize; // size of the preview picture
    private int mCameraType = CameraCharacteristics.LENS_FACING_FRONT; // camera type (front or rear)
    private int mTakeType = 0; // shooting type: 0 = single shot, 1 = burst

    public Camera2View(Context context) {
        this(context, null);
    }

    public Camera2View(Context context, AttributeSet attrs) {
        super(context, attrs);
        mContext = context;
        mThreadHandler = new HandlerThread("camera2");
        mThreadHandler.start();
        mHandler = new Handler(mThreadHandler.getLooper());
    }

    // Open the camera view with the specified camera
    public void open(int camera_type) {
        mCameraType = camera_type;
        // Set the surface texture change listener
        setSurfaceTextureListener(mSurfacetextlistener);
    }

    private String mPhotoPath; // save path of the photo
    // Get the save path of the photo
    public String getPhotoPath() {
        return mPhotoPath;
    }

    // Take a single photo
    public void takePicture() {
        Log.d(TAG, "taking a single photo");
        mTakeType = 0;
        try {
            CaptureRequest.Builder builder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            // Add the image reader as a target of the request
            builder.addTarget(mImageReader.getSurface());
            // Set the auto-focus mode
            builder.set(CaptureRequest.CONTROL_AF_MODE,
                    CaptureRequest.CONTROL_AF_MODE_AUTO);
            // Set the auto-exposure mode
            builder.set(CaptureRequest.CONTROL_AE_MODE,
                    CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
            // Start focusing
            builder.set(CaptureRequest.CONTROL_AF_TRIGGER,
                    CameraMetadata.CONTROL_AF_TRIGGER_START);
            // Set the orientation of the photo
            builder.set(CaptureRequest.JPEG_ORIENTATION, (mCameraType == CameraCharacteristics.LENS_FACING_FRONT) ? 90 : 270);
            // The capture session captures a single photo
            mCameraSession.capture(builder.build(), null, mHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    private ArrayList<String> mShootingArray; // list of photo paths saved during burst shooting
    // Get the list of photo paths saved during burst shooting
    public ArrayList<String> getShootingList() {
        Log.d(TAG, "mShootingArray.size()=" + mShootingArray.size());
        return mShootingArray;
    }

    // Start burst shooting
    public void startShooting(int duration) {
        Log.d(TAG, "burst shooting");
        mTakeType = 1;
        mShootingArray = new ArrayList<String>();
        try {
            // Stop the current repeating request first
            mCameraSession.stopRepeating();
            // Add the image reader as a target of the preview request
            mPreviewBuilder.addTarget(mImageReader.getSurface());
            // Set the repeating request. Preview frames now go to both the screen and the image reader
            mCameraSession.setRepeatingRequest(mPreviewBuilder.build(), null, mHandler);
            // If duration <= 0 the burst keeps running and the caller must invoke stopShooting to end it
            if (duration > 0) {
                // Schedule the stop task after the given delay (milliseconds)
                mHandler.postDelayed(mStop, duration);
            }
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    // Stop burst shooting
    public void stopShooting() {
        try {
            // Stop the current repeating request
            mCameraSession.stopRepeating();
            // Remove the image reader from the preview targets
            mPreviewBuilder.removeTarget(mImageReader.getSurface());
            // Set the repeating request again. Preview frames now go only to the screen
            mCameraSession.setRepeatingRequest(mPreviewBuilder.build(), null, mHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
        Toast.makeText(mContext, "Burst finished. Press the back key to return to the previous page and view the photos.", Toast.LENGTH_SHORT).show();
    }

    // Task that stops the burst shooting
    private Runnable mStop = new Runnable() {
        @Override
        public void run() {
            stopShooting();
        }
    };

    // Open the camera
    private void openCamera() {
        // Obtain the camera manager from the system service
        CameraManager cm = (CameraManager) mContext.getSystemService(Context.CAMERA_SERVICE);
        String cameraid = mCameraType + "";
        try {
            // Get the characteristics of the specified camera
            CameraCharacteristics cc = cm.getCameraCharacteristics(cameraid);
            // Check the support level of the camera hardware
            // CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL means full support
            // CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED means limited support
            // CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY means legacy support only
            int level = cc.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
            if (level == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL){
                ToastUtil.toastWord(mContext,"Fully supported");
            }else if (level == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED){
                ToastUtil.toastWord(mContext,"Limited support");
            }else if (level == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY){
                ToastUtil.toastWord(mContext,"Legacy level, not recommended");
            }
            StreamConfigurationMap map = cc.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
            Size largest = Collections.max(Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)), new CompareSizeByArea());
            // Get the size of the preview picture
            mPreViewSize = map.getOutputSizes(SurfaceTexture.class)[0];
            // Create a JPEG image reader
            mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(), ImageFormat.JPEG, 10);
            // Set the image-available listener; onImageAvailable fires as soon as image data is captured
            mImageReader.setOnImageAvailableListener(onImageAvaiableListener, mHandler);
            if (ActivityCompat.checkSelfPermission(mContext, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
                // Open the camera
                cm.openCamera(cameraid, mDeviceStateCallback, mHandler);
            }
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    // Close the camera
    private void closeCamera() {
        if (null != mCameraSession) {
            mCameraSession.close(); // close the camera capture session
            mCameraSession = null;
        }
        if (null != mCameraDevice) {
            mCameraDevice.close(); // close the camera device
            mCameraDevice = null;
        }
        if (null != mImageReader) {
            mImageReader.close(); // close the image reader
            mImageReader = null;
        }
    }

    // Surface texture listener. Opens the camera as soon as the TextureView is ready
    private SurfaceTextureListener mSurfacetextlistener = new SurfaceTextureListener() {
        // Triggered when the surface texture becomes available
        public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
            openCamera(); // open the camera
        }

        // Triggered when the size of the surface texture changes
        public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {}

        // Triggered when the surface texture is destroyed
        public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
            closeCamera(); // close the camera
            return true;
        }

        // Triggered when the surface texture is updated
        public void onSurfaceTextureUpdated(SurfaceTexture surface) {}
    };

    // Create the camera preview session
    private void createCameraPreviewSession() {
        // Get the surface texture of this texture view
        SurfaceTexture texture = getSurfaceTexture();
        // Set the default buffer size of the surface texture
        texture.setDefaultBufferSize(mPreViewSize.getWidth(), mPreViewSize.getHeight());
        // Create a Surface for this surface texture
        Surface surface = new Surface(texture);
        try {
            mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            // Add the texture view's surface as a preview target
            mPreviewBuilder.addTarget(surface);
            // Set the auto-focus mode
            mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                    CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
            // Set the auto-exposure mode
            mPreviewBuilder.set(CaptureRequest.CONTROL_AE_MODE,
                    CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
            // Start focusing
            mPreviewBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER,
                    CameraMetadata.CONTROL_AF_TRIGGER_START);
            // Set the orientation of the photo
            mPreviewBuilder.set(CaptureRequest.JPEG_ORIENTATION, (mCameraType == CameraCharacteristics.LENS_FACING_FRONT) ? 90 : 270);
            // Create a capture session. The preview picture is then shown on the texture view
            mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                    mSessionStateCallback, mHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    // Once the camera is opened, start the session that captures images
    private CameraDevice.StateCallback mDeviceStateCallback = new CameraDevice.StateCallback() {
        @Override
        public void onOpened(CameraDevice cameraDevice) {
            mCameraDevice = cameraDevice;
            createCameraPreviewSession();
        }

        @Override
        public void onDisconnected(CameraDevice cameraDevice) {
            cameraDevice.close();
            mCameraDevice = null;
        }

        @Override
        public void onError(CameraDevice cameraDevice, int error) {
            cameraDevice.close();
            mCameraDevice = null;
        }
    };

    // Once the session is configured, present the preview picture on the screen
    private CameraCaptureSession.StateCallback mSessionStateCallback = new CameraCaptureSession.StateCallback() {
        @Override
        public void onConfigured(CameraCaptureSession session) {
            try {
                Log.d(TAG, "onConfigured");
                mCameraSession = session;
                // Set the repeating request. Preview frames go only to the screen at this point
                mCameraSession.setRepeatingRequest(mPreviewBuilder.build(), null, mHandler);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onConfigureFailed(CameraCaptureSession session) {}
    };

    // The onImageAvailable event fires as soon as image data is generated
    private OnImageAvailableListener onImageAvaiableListener = new OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader imageReader) {
            Log.d(TAG, "onImageAvailable");
            mHandler.post(new ImageSaver(imageReader.acquireNextImage()));
        }
    };

    // Task that saves a captured image
    private class ImageSaver implements Runnable {
        private Image mImage;

        public ImageSaver(Image reader) {
            mImage = reader;
        }

        @Override
        public void run() {
            // Build the save path for this photo
            List<String> listPath = new ArrayList<>();
            listPath.add("myCamera");
            listPath.add("photos");
            String path = PathGetUtil.getLongwayPath(mContext, listPath);
            File fileDir = new File(path);
            if (!fileDir.exists()) {
                fileDir.mkdirs();
            }
            File filePic = new File(path, "ww" + System.currentTimeMillis() + ".jpg");
            if (!filePic.exists()) {
                try {
                    filePic.createNewFile();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            // Save the image file
            saveImage(filePic.getPath(), mImage.getPlanes()[0].getBuffer());
            //BitmapUtil.setPictureDegreeZero(path);
            if (mImage != null) {
                mImage.close();
            }
            if (mTakeType == 0) { // single shot
                mPhotoPath = filePic.getPath();
            } else { // burst
                mShootingArray.add(filePic.getPath());
            }
            Log.d(TAG, "image saved, path=" + filePic.getPath());
        }
    }

    public static void saveImage(String path, ByteBuffer byteBuffer){
        try {
            File file = new File(path);
            boolean append = false;
            FileChannel wChannel = new FileOutputStream(file, append).getChannel();
            wChannel.write(byteBuffer);
            wChannel.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private class CompareSizeByArea implements java.util.Comparator<Size> {
        @Override
        public int compare(Size lhs, Size rhs) {
            return Long.signum((long) lhs.getWidth() * lhs.getHeight()
                    - (long) rhs.getWidth() * rhs.getHeight());
        }
    }
}

activity_main.xml

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
    <com.example.cameraself.Camera2View
        android:id="@+id/camera2"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="horizontal"
        android:layout_alignParentBottom="true">
        <Button
            android:id="@+id/one"
            android:layout_width="0dp"
            android:layout_weight="1"
            android:layout_height="wrap_content"
            android:layout_alignParentBottom="true"
            android:text="单拍"/>

        <Button
            android:id="@+id/two"
            android:layout_width="0dp"
            android:layout_weight="1"
            android:layout_height="wrap_content"
            android:layout_alignParentBottom="true"
            android:text="连拍"
            android:layout_marginLeft="10dp"/>
    </LinearLayout>

</RelativeLayout>

MainActivity.java

import android.hardware.camera2.CameraCharacteristics;
import android.os.Build;
import android.os.Handler;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;
import com.hao.baselib.base.WaterPermissionActivity;

public class MainActivity extends WaterPermissionActivity<MainModel>
        implements MainCallback, View.OnClickListener {

    private Button one;
    private Button two;
    private Camera2View camera2;

    @Override
    protected MainModel getModelImp() {
        return new MainModel(this, this);
    }

    @Override
    protected int getContentLayoutId() {
        return R.layout.activity_main;
    }

    @Override
    protected void initWidget() {
        one = findViewById(R.id.one);
        two = findViewById(R.id.two);
        camera2 = findViewById(R.id.camera2);
        one.setOnClickListener(this);
        two.setOnClickListener(this);
        requestPermission(READ_EXTERNAL_STORAGE);
    }

    @Override
    protected void doSDRead() {
        requestPermission(WRITE_EXTERNAL_STORAGE);
    }

    @Override
    protected void doSDWrite() {
        requestPermission(CAMERA);
    }

    @Override
    protected void doCamera() {
        // Get the camera type passed from the previous page
//        int camera_type = CameraCharacteristics.LENS_FACING_BACK;
        int camera_type = CameraCharacteristics.LENS_FACING_FRONT;
        // Set the camera type of the Camera2 view
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
            camera2.open(camera_type);
        }
    }

    @Override
    public void onClick(View v) {
        switch (v.getId()) {
            case R.id.one:
                // Tell the Camera2 view to take a single shot
                if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
                    camera2.takePicture();
                }
                // Taking a photo involves focusing, capturing and saving the image, so leave the system enough time to finish
                new Handler().postDelayed(new Runnable() {
                    @Override
                    public void run() {
                        Toast.makeText(MainActivity.this, "Photo taken", Toast.LENGTH_SHORT).show();
                    }
                }, 1500);
                break;
            case R.id.two:
                // Tell the Camera2 view to start burst shooting
                if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
                    camera2.startShooting(7000);
                }
                break;
        }
    }
}
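
The example above relies on the WaterPermissionActivity base class to request permissions. If you are not using that library, a minimal sketch of requesting the camera permission with the standard androidx API would look like this (the helper class PermissionHelper and the request code 100 are arbitrary choices for illustration):

import android.Manifest;
import android.content.pm.PackageManager;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

public class PermissionHelper {
    private static final int REQUEST_CAMERA = 100; // arbitrary request code

    // Ask for the CAMERA permission if it has not been granted yet; the result
    // arrives in the activity's onRequestPermissionsResult callback.
    public static boolean ensureCameraPermission(AppCompatActivity activity) {
        if (ContextCompat.checkSelfPermission(activity, Manifest.permission.CAMERA)
                == PackageManager.PERMISSION_GRANTED) {
            return true;
        }
        ActivityCompat.requestPermissions(activity,
                new String[]{Manifest.permission.CAMERA}, REQUEST_CAMERA);
        return false;
    }
}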

With this we have achieved the goal of building a custom camera with Camera2. In actual testing, the photos taken with Camera2 are indeed noticeably sharper than those taken with the old Camera API, and both the resolution and the final file size have improved. They are still worse than the system camera app, though, and far behind the custom cameras of mainstream applications such as WeChat, because those involve a great deal of algorithm optimization, hardware support and other work. Even so, it can meet a good part of our development needs. So if you simply need to take a photo and upload it, calling the system camera is recommended; if you must customize the camera interface, use the approach described here. I will also study CameraX in Jetpack and write a blog post about it when I have the chance.
