[Android Camera2] Source Code Analysis: The Official Camera2Basic Sample

1. Introduction

This article gives a brief walkthrough of the source code of the official Camera2 sample; the core Camera2 flow code is attached as an appendix at the end.

Related articles:

  1. Android Camera series: article index
  2. Android Camera2 overview
  3. Camera2 open-source projects: source-code analysis roundup

Link to the official demo

2. Core Source Code Analysis

The analysis is broken down into the following parts:

  1. CameraView UI
  2. Initialization and parameter setup
  3. Still capture

2.1 CameraView Source

The sample uses a SurfaceView. For the various ways a CameraView can be implemented, see the earlier article:
"Basic initialization framework and CameraView implementation approaches (with pseudocode)"

fragmentCameraBinding.viewFinder.holder.addCallback(object : SurfaceHolder.Callback {
    override fun surfaceDestroyed(holder: SurfaceHolder) = Unit

    override fun surfaceChanged(
            holder: SurfaceHolder,
            format: Int,
            width: Int,
            height: Int) = Unit

    override fun surfaceCreated(holder: SurfaceHolder) {
        // Selects appropriate preview size and configures view finder
        val previewSize = getPreviewOutputSize(
            fragmentCameraBinding.viewFinder.display,
            characteristics,
            SurfaceHolder::class.java
        )
        Log.d(TAG, "View finder size: ${fragmentCameraBinding.viewFinder.width} x ${fragmentCameraBinding.viewFinder.height}")
        Log.d(TAG, "Selected preview size: $previewSize")
        fragmentCameraBinding.viewFinder.setAspectRatio(
            previewSize.width,
            previewSize.height
        )

        // To ensure that size is set, initialize camera in the view's thread
        view.post { initializeCamera() }
    }
})

/**
 * Returns the largest available PREVIEW size. For more information, see:
 * https://d.android.com/reference/android/hardware/camera2/CameraDevice and
 * https://developer.android.com/reference/android/hardware/camera2/params/StreamConfigurationMap
 */
fun <T> getPreviewOutputSize(
        display: Display,
        characteristics: CameraCharacteristics,
        targetClass: Class<T>,
        format: Int? = null
): Size {

    // Find which is smaller: screen or 1080p
    val screenSize = getDisplaySmartSize(display)
    val hdScreen = screenSize.long >= SIZE_1080P.long || screenSize.short >= SIZE_1080P.short
    val maxSize = if (hdScreen) SIZE_1080P else screenSize

    // If image format is provided, use it to determine supported sizes; else use target class
    val config = characteristics.get(
            CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
    if (format == null)
        assert(StreamConfigurationMap.isOutputSupportedFor(targetClass))
    else
        assert(config.isOutputSupportedFor(format))
    val allSizes = if (format == null)
        config.getOutputSizes(targetClass) else config.getOutputSizes(format)

    // Get available sizes and sort them by area from largest to smallest
    val validSizes = allSizes
            .sortedWith(compareBy { it.height * it.width })
            .map { SmartSize(it.width, it.height) }.reversed()

    // Then, get the largest output size that is smaller or equal than our max size
    return validSizes.first { it.long <= maxSize.long && it.short <= maxSize.short }.size
}

Analysis:

  1. fragmentCameraBinding.viewFinder is a SurfaceView, and a SurfaceHolder.Callback is attached to it.
  2. The surfaceCreated(holder: SurfaceHolder) callback reads the CameraView's UI width and height, then getPreviewOutputSize() matches them against the preview sizes the camera reports via characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputSizes(format).
  3. initializeCamera() is called to run the camera-opening logic.
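The size-matching logic of getPreviewOutputSize() can be sketched without any Android dependencies. Note that this SmartSize and pickPreviewSize are simplified stand-ins written for illustration, not the sample's actual helpers:

```kotlin
// Simplified stand-in for the sample's SmartSize helper: `long` is the larger
// dimension and `short` the smaller, regardless of orientation.
data class SmartSize(val width: Int, val height: Int) {
    val long = maxOf(width, height)
    val short = minOf(width, height)
}

val SIZE_1080P = SmartSize(1920, 1080)

// Mirrors the selection logic of getPreviewOutputSize(): cap the target at
// 1080p, then pick the largest camera-reported size that fits under the cap.
fun pickPreviewSize(displaySize: SmartSize, cameraSizes: List<SmartSize>): SmartSize {
    val hdScreen = displaySize.long >= SIZE_1080P.long || displaySize.short >= SIZE_1080P.short
    val maxSize = if (hdScreen) SIZE_1080P else displaySize
    return cameraSizes
        .sortedByDescending { it.long * it.short }   // largest area first
        .first { it.long <= maxSize.long && it.short <= maxSize.short }
}
```

For example, on a 2560x1440 display the cap becomes 1080p, so 1920x1080 is chosen over 3840x2160 even though both are supported.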

2.2 Initialization

/**
 * Begin all camera operations in a coroutine in the main thread. This function:
 * - Opens the camera
 * - Configures the camera session
 * - Starts the preview by dispatching a repeating capture request
 */
private fun initializeCamera() = lifecycleScope.launch(Dispatchers.Main) {
    // Open the selected camera
    camera = openCamera(cameraManager, args.cameraId, cameraHandler)

    // Initialize an image reader which will be used to capture still photos
    val size = characteristics.get(
            CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
            .getOutputSizes(args.pixelFormat).maxByOrNull { it.height * it.width }!!
    imageReader = ImageReader.newInstance(
            size.width, size.height, args.pixelFormat, IMAGE_BUFFER_SIZE)

    // Creates list of Surfaces where the camera will output frames
    val targets = listOf(fragmentCameraBinding.viewFinder.holder.surface, imageReader.surface)

    // Start a capture session using our open camera and list of Surfaces where frames will go
    session = createCaptureSession(camera, targets, cameraHandler)

    val captureRequest = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
            .apply { addTarget(fragmentCameraBinding.viewFinder.holder.surface) }

    // This will keep sending the capture request as frequently as possible until the
    // session is torn down or session.stopRepeating() is called
    session.setRepeatingRequest(captureRequest.build(), null, cameraHandler)

}

/** Opens the camera and returns the opened device (as the result of the suspend coroutine) */
@SuppressLint("MissingPermission")
private suspend fun openCamera(
       manager: CameraManager,
       cameraId: String,
       handler: Handler? = null
): CameraDevice = suspendCancellableCoroutine { cont ->
   manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
       override fun onOpened(device: CameraDevice) = cont.resume(device)

       override fun onDisconnected(device: CameraDevice) {
           Log.w(TAG, "Camera $cameraId has been disconnected")
           requireActivity().finish()
       }

       override fun onError(device: CameraDevice, error: Int) {
           val msg = when (error) {
               ERROR_CAMERA_DEVICE -> "Fatal (device)"
               ERROR_CAMERA_DISABLED -> "Device policy"
               ERROR_CAMERA_IN_USE -> "Camera in use"
               ERROR_CAMERA_SERVICE -> "Fatal (service)"
               ERROR_MAX_CAMERAS_IN_USE -> "Maximum cameras in use"
               else -> "Unknown"
           }
           val exc = RuntimeException("Camera $cameraId error: ($error) $msg")
           Log.e(TAG, exc.message, exc)
           if (cont.isActive) cont.resumeWithException(exc)
       }
   }, handler)
}

Analysis:

  1. Open the camera and keep a reference to the CameraDevice.
  2. Configure the camera session with two frame-stream targets: an ImageReader that captures still frames, and the SurfaceHolder's surface for preview.
  3. Create the session, create the request, and send the request:

session = createCaptureSession(camera, targets, cameraHandler)
val captureRequest = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
        .apply { addTarget(fragmentCameraBinding.viewFinder.holder.surface) }
// This will keep sending the capture request as frequently as possible until the
// session is torn down or session.stopRepeating() is called
session.setRepeatingRequest(captureRequest.build(), null, cameraHandler)
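The openCamera() wrapper above is the key pattern in this section: bridging a callback-style API into a suspend function. Below is a minimal stdlib-only sketch of the same idea; FakeCameraManager and runSync are invented for illustration (the sample itself uses suspendCancellableCoroutine from kotlinx.coroutines and a real CameraDevice.StateCallback):

```kotlin
import kotlin.coroutines.Continuation
import kotlin.coroutines.EmptyCoroutineContext
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlin.coroutines.startCoroutine
import kotlin.coroutines.suspendCoroutine

// Hypothetical callback-based API standing in for CameraManager.openCamera();
// the real StateCallback has onOpened/onDisconnected/onError methods instead.
class FakeCameraManager {
    fun openCamera(id: String, onOpened: (String) -> Unit, onError: (Exception) -> Unit) {
        if (id == "0") onOpened("device-$id")
        else onError(IllegalArgumentException("no camera $id"))
    }
}

// Same shape as the sample's openCamera(): resume the continuation from the
// success callback, or resumeWithException from the error callback.
suspend fun openCameraSuspend(manager: FakeCameraManager, id: String): String =
    suspendCoroutine { cont ->
        manager.openCamera(id,
            onOpened = { cont.resume(it) },
            onError = { cont.resumeWithException(it) })
    }

// Tiny driver for a suspend function; sufficient here because the callback
// fires synchronously. Real code launches from a coroutine scope instead.
fun <T> runSync(block: suspend () -> T): T {
    var outcome: Result<T>? = null
    block.startCoroutine(Continuation(EmptyCoroutineContext) { outcome = it })
    return outcome!!.getOrThrow()
}
```

The win is that the caller simply writes `camera = openCamera(...)` inside a coroutine, instead of threading state through callbacks.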

2.3 Still Capture

/**
 * Helper function used to capture a still image using the [CameraDevice.TEMPLATE_STILL_CAPTURE]
 * template. It performs synchronization between the [CaptureResult] and the [Image] resulting
 * from the single capture, and outputs a [CombinedCaptureResult] object.
 */
private suspend fun takePhoto():
        CombinedCaptureResult = suspendCoroutine { cont ->

    // Flush any images left in the image reader
    @Suppress("ControlFlowWithEmptyBody")
    while (imageReader.acquireNextImage() != null) {
    }

    // Start a new image queue
    val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)
    imageReader.setOnImageAvailableListener({ reader ->
        val image = reader.acquireNextImage()
        Log.d(TAG, "Image available in queue: ${image.timestamp}")
        imageQueue.add(image)
    }, imageReaderHandler)

    val captureRequest = session.device.createCaptureRequest(
            CameraDevice.TEMPLATE_STILL_CAPTURE).apply { addTarget(imageReader.surface) }
    session.capture(captureRequest.build(), object : CameraCaptureSession.CaptureCallback() {

        override fun onCaptureStarted(
                session: CameraCaptureSession,
                request: CaptureRequest,
                timestamp: Long,
                frameNumber: Long) {
            super.onCaptureStarted(session, request, timestamp, frameNumber)
            fragmentCameraBinding.viewFinder.post(animationTask)
        }

        override fun onCaptureCompleted(
                session: CameraCaptureSession,
                request: CaptureRequest,
                result: TotalCaptureResult) {
            super.onCaptureCompleted(session, request, result)
            val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
            Log.d(TAG, "Capture result received: $resultTimestamp")

            // Set a timeout in case image captured is dropped from the pipeline
            val exc = TimeoutException("Image dequeuing took too long")
            val timeoutRunnable = Runnable { cont.resumeWithException(exc) }
            imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)

            // Loop in the coroutine's context until an image with matching timestamp comes
            // We need to launch the coroutine context again because the callback is done in
            //  the handler provided to the `capture` method, not in our coroutine context
            @Suppress("BlockingMethodInNonBlockingContext")
            lifecycleScope.launch(cont.context) {
                while (true) {

                    // Dequeue images while timestamps don't match
                    val image = imageQueue.take()
                    // TODO(owahltinez): b/142011420
                    // if (image.timestamp != resultTimestamp) continue
                    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q &&
                            image.format != ImageFormat.DEPTH_JPEG &&
                            image.timestamp != resultTimestamp) continue
                    Log.d(TAG, "Matching image dequeued: ${image.timestamp}")

                    // Unset the image reader listener
                    imageReaderHandler.removeCallbacks(timeoutRunnable)
                    imageReader.setOnImageAvailableListener(null, null)

                    // Clear the queue of images, if there are left
                    while (imageQueue.size > 0) {
                        imageQueue.take().close()
                    }

                    // Compute EXIF orientation metadata
                    val rotation = relativeOrientation.value ?: 0
                    val mirrored = characteristics.get(CameraCharacteristics.LENS_FACING) ==
                            CameraCharacteristics.LENS_FACING_FRONT
                    val exifOrientation = computeExifOrientation(rotation, mirrored)

                    // Build the result and resume progress
                    cont.resume(CombinedCaptureResult(
                            image, result, exifOrientation, imageReader.imageFormat))

                    // There is no need to break out of the loop, this coroutine will suspend
                }
            }
        }
    }, cameraHandler)
}

Analysis:

  1. Trigger the still capture: createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE).
  2. Listen for the result: imageReader.setOnImageAvailableListener.
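The heart of takePhoto() is synchronizing the TotalCaptureResult (which carries CaptureResult.SENSOR_TIMESTAMP) with the Image that lands in the ImageReader's queue. The matching loop can be sketched with a plain blocking queue; FakeImage and the timestamps below are made-up test data:

```kotlin
import java.util.concurrent.ArrayBlockingQueue

// Minimal stand-in for android.media.Image; only the timestamp matters here.
data class FakeImage(val timestamp: Long)

// Mirrors the dequeue loop in onCaptureCompleted(): skip images whose
// timestamp does not match SENSOR_TIMESTAMP from the capture result.
// (The real code also arms a timeout, since take() blocks if no match arrives.)
fun dequeueMatching(queue: ArrayBlockingQueue<FakeImage>, resultTimestamp: Long): FakeImage {
    while (true) {
        val image = queue.take()
        if (image.timestamp != resultTimestamp) continue   // stale frame, discard
        return image
    }
}
```

This pairing step is what guarantees the saved photo is the exact frame described by the returned capture metadata, not a leftover frame from an earlier request.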

3. Summary

Overall, Camera2Basic is a deliberately simple demo, intended to let developers get familiar with the API.

Appendix: Camera2 Core Code

class CameraFragment : Fragment() {

/** Android ViewBinding */
private var _fragmentCameraBinding: FragmentCameraBinding? = null

private val fragmentCameraBinding get() = _fragmentCameraBinding!!

/** AndroidX navigation arguments */
private val args: CameraFragmentArgs by navArgs()

/** Host's navigation controller */
private val navController: NavController by lazy {
    Navigation.findNavController(requireActivity(), R.id.fragment_container)
}

/** Detects, characterizes, and connects to a CameraDevice (used for all camera operations) */
private val cameraManager: CameraManager by lazy {
    val context = requireContext().applicationContext
    context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
}

/** [CameraCharacteristics] corresponding to the provided Camera ID */
private val characteristics: CameraCharacteristics by lazy {
    cameraManager.getCameraCharacteristics(args.cameraId)
}

/** Readers used as buffers for camera still shots */
private lateinit var imageReader: ImageReader

/** [HandlerThread] where all camera operations run */
private val cameraThread = HandlerThread("CameraThread").apply { start() }

/** [Handler] corresponding to [cameraThread] */
private val cameraHandler = Handler(cameraThread.looper)

/** Performs recording animation of flashing screen */
private val animationTask: Runnable by lazy {
    Runnable {
        // Flash white animation
        fragmentCameraBinding.overlay.background = Color.argb(150, 255, 255, 255).toDrawable()
        // Wait for ANIMATION_FAST_MILLIS
        fragmentCameraBinding.overlay.postDelayed({
            // Remove white flash animation
            fragmentCameraBinding.overlay.background = null
        }, CameraActivity.ANIMATION_FAST_MILLIS)
    }
}

/** [HandlerThread] where all buffer reading operations run */
private val imageReaderThread = HandlerThread("imageReaderThread").apply { start() }

/** [Handler] corresponding to [imageReaderThread] */
private val imageReaderHandler = Handler(imageReaderThread.looper)

/** The [CameraDevice] that will be opened in this fragment */
private lateinit var camera: CameraDevice

/** Internal reference to the ongoing [CameraCaptureSession] configured with our parameters */
private lateinit var session: CameraCaptureSession

/** Live data listener for changes in the device orientation relative to the camera */
private lateinit var relativeOrientation: OrientationLiveData

override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
): View {
    _fragmentCameraBinding = FragmentCameraBinding.inflate(inflater, container, false)
    return fragmentCameraBinding.root
}

@SuppressLint("MissingPermission")
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    super.onViewCreated(view, savedInstanceState)
    fragmentCameraBinding.captureButton.setOnApplyWindowInsetsListener { v, insets ->
        v.translationX = (-insets.systemWindowInsetRight).toFloat()
        v.translationY = (-insets.systemWindowInsetBottom).toFloat()
        insets.consumeSystemWindowInsets()
    }

    fragmentCameraBinding.viewFinder.holder.addCallback(object : SurfaceHolder.Callback {
        override fun surfaceDestroyed(holder: SurfaceHolder) = Unit

        override fun surfaceChanged(
                holder: SurfaceHolder,
                format: Int,
                width: Int,
                height: Int) = Unit

        override fun surfaceCreated(holder: SurfaceHolder) {
            // Selects appropriate preview size and configures view finder
            val previewSize = getPreviewOutputSize(
                fragmentCameraBinding.viewFinder.display,
                characteristics,
                SurfaceHolder::class.java
            )
            Log.d(TAG, "View finder size: ${fragmentCameraBinding.viewFinder.width} x ${fragmentCameraBinding.viewFinder.height}")
            Log.d(TAG, "Selected preview size: $previewSize")
            fragmentCameraBinding.viewFinder.setAspectRatio(
                previewSize.width,
                previewSize.height
            )

            // To ensure that size is set, initialize camera in the view's thread
            view.post { initializeCamera() }
        }
    })

    // Used to rotate the output media to match device orientation
    relativeOrientation = OrientationLiveData(requireContext(), characteristics).apply {
        observe(viewLifecycleOwner, Observer { orientation ->
            Log.d(TAG, "Orientation changed: $orientation")
        })
    }
}

/**
 * Begin all camera operations in a coroutine in the main thread. This function:
 * - Opens the camera
 * - Configures the camera session
 * - Starts the preview by dispatching a repeating capture request
 * - Sets up the still image capture listeners
 */
private fun initializeCamera() = lifecycleScope.launch(Dispatchers.Main) {
    // Open the selected camera
    camera = openCamera(cameraManager, args.cameraId, cameraHandler)

    // Initialize an image reader which will be used to capture still photos
    val size = characteristics.get(
            CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
            .getOutputSizes(args.pixelFormat).maxByOrNull { it.height * it.width }!!
    imageReader = ImageReader.newInstance(
            size.width, size.height, args.pixelFormat, IMAGE_BUFFER_SIZE)

    // Creates list of Surfaces where the camera will output frames
    val targets = listOf(fragmentCameraBinding.viewFinder.holder.surface, imageReader.surface)

    // Start a capture session using our open camera and list of Surfaces where frames will go
    session = createCaptureSession(camera, targets, cameraHandler)

    val captureRequest = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
            .apply { addTarget(fragmentCameraBinding.viewFinder.holder.surface) }

    // This will keep sending the capture request as frequently as possible until the
    // session is torn down or session.stopRepeating() is called
    session.setRepeatingRequest(captureRequest.build(), null, cameraHandler)

    // Listen to the capture button
    fragmentCameraBinding.captureButton.setOnClickListener {

        // Disable click listener to prevent multiple requests simultaneously in flight
        it.isEnabled = false

        // Perform I/O heavy operations in a different scope
        lifecycleScope.launch(Dispatchers.IO) {
            takePhoto().use { result ->
                Log.d(TAG, "Result received: $result")

                // Save the result to disk
                val output = saveResult(result)
                Log.d(TAG, "Image saved: ${output.absolutePath}")

                // If the result is a JPEG file, update EXIF metadata with orientation info
                if (output.extension == "jpg") {
                    val exif = ExifInterface(output.absolutePath)
                    exif.setAttribute(
                            ExifInterface.TAG_ORIENTATION, result.orientation.toString())
                    exif.saveAttributes()
                    Log.d(TAG, "EXIF metadata saved: ${output.absolutePath}")
                }

                // Display the photo taken to user
                lifecycleScope.launch(Dispatchers.Main) {
                    navController.navigate(CameraFragmentDirections
                            .actionCameraToJpegViewer(output.absolutePath)
                            .setOrientation(result.orientation)
                            .setDepth(Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q &&
                                    result.format == ImageFormat.DEPTH_JPEG))
                }
            }

            // Re-enable click listener after photo is taken
            it.post { it.isEnabled = true }
        }
    }
}

/** Opens the camera and returns the opened device (as the result of the suspend coroutine) */
@SuppressLint("MissingPermission")
private suspend fun openCamera(
        manager: CameraManager,
        cameraId: String,
        handler: Handler? = null
): CameraDevice = suspendCancellableCoroutine { cont ->
    manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        override fun onOpened(device: CameraDevice) = cont.resume(device)

        override fun onDisconnected(device: CameraDevice) {
            Log.w(TAG, "Camera $cameraId has been disconnected")
            requireActivity().finish()
        }

        override fun onError(device: CameraDevice, error: Int) {
            val msg = when (error) {
                ERROR_CAMERA_DEVICE -> "Fatal (device)"
                ERROR_CAMERA_DISABLED -> "Device policy"
                ERROR_CAMERA_IN_USE -> "Camera in use"
                ERROR_CAMERA_SERVICE -> "Fatal (service)"
                ERROR_MAX_CAMERAS_IN_USE -> "Maximum cameras in use"
                else -> "Unknown"
            }
            val exc = RuntimeException("Camera $cameraId error: ($error) $msg")
            Log.e(TAG, exc.message, exc)
            if (cont.isActive) cont.resumeWithException(exc)
        }
    }, handler)
}

/**
 * Starts a [CameraCaptureSession] and returns the configured session (as the result of the
 * suspend coroutine
 */
private suspend fun createCaptureSession(
        device: CameraDevice,
        targets: List<Surface>,
        handler: Handler? = null
): CameraCaptureSession = suspendCoroutine { cont ->

    // Create a capture session using the predefined targets; this also involves defining the
    // session state callback to be notified of when the session is ready
    device.createCaptureSession(targets, object : CameraCaptureSession.StateCallback() {

        override fun onConfigured(session: CameraCaptureSession) = cont.resume(session)

        override fun onConfigureFailed(session: CameraCaptureSession) {
            val exc = RuntimeException("Camera ${device.id} session configuration failed")
            Log.e(TAG, exc.message, exc)
            cont.resumeWithException(exc)
        }
    }, handler)
}

/**
 * Helper function used to capture a still image using the [CameraDevice.TEMPLATE_STILL_CAPTURE]
 * template. It performs synchronization between the [CaptureResult] and the [Image] resulting
 * from the single capture, and outputs a [CombinedCaptureResult] object.
 */
private suspend fun takePhoto():
        CombinedCaptureResult = suspendCoroutine { cont ->

    // Flush any images left in the image reader
    @Suppress("ControlFlowWithEmptyBody")
    while (imageReader.acquireNextImage() != null) {
    }

    // Start a new image queue
    val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)
    imageReader.setOnImageAvailableListener({ reader ->
        val image = reader.acquireNextImage()
        Log.d(TAG, "Image available in queue: ${image.timestamp}")
        imageQueue.add(image)
    }, imageReaderHandler)

    val captureRequest = session.device.createCaptureRequest(
            CameraDevice.TEMPLATE_STILL_CAPTURE).apply { addTarget(imageReader.surface) }
    session.capture(captureRequest.build(), object : CameraCaptureSession.CaptureCallback() {

        override fun onCaptureStarted(
                session: CameraCaptureSession,
                request: CaptureRequest,
                timestamp: Long,
                frameNumber: Long) {
            super.onCaptureStarted(session, request, timestamp, frameNumber)
            fragmentCameraBinding.viewFinder.post(animationTask)
        }

        override fun onCaptureCompleted(
                session: CameraCaptureSession,
                request: CaptureRequest,
                result: TotalCaptureResult) {
            super.onCaptureCompleted(session, request, result)
            val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
            Log.d(TAG, "Capture result received: $resultTimestamp")

            // Set a timeout in case image captured is dropped from the pipeline
            val exc = TimeoutException("Image dequeuing took too long")
            val timeoutRunnable = Runnable { cont.resumeWithException(exc) }
            imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)

            // Loop in the coroutine's context until an image with matching timestamp comes
            // We need to launch the coroutine context again because the callback is done in
            //  the handler provided to the `capture` method, not in our coroutine context
            @Suppress("BlockingMethodInNonBlockingContext")
            lifecycleScope.launch(cont.context) {
                while (true) {

                    // Dequeue images while timestamps don't match
                    val image = imageQueue.take()
                    // TODO(owahltinez): b/142011420
                    // if (image.timestamp != resultTimestamp) continue
                    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q &&
                            image.format != ImageFormat.DEPTH_JPEG &&
                            image.timestamp != resultTimestamp) continue
                    Log.d(TAG, "Matching image dequeued: ${image.timestamp}")

                    // Unset the image reader listener
                    imageReaderHandler.removeCallbacks(timeoutRunnable)
                    imageReader.setOnImageAvailableListener(null, null)

                    // Clear the queue of images, if there are left
                    while (imageQueue.size > 0) {
                        imageQueue.take().close()
                    }

                    // Compute EXIF orientation metadata
                    val rotation = relativeOrientation.value ?: 0
                    val mirrored = characteristics.get(CameraCharacteristics.LENS_FACING) ==
                            CameraCharacteristics.LENS_FACING_FRONT
                    val exifOrientation = computeExifOrientation(rotation, mirrored)

                    // Build the result and resume progress
                    cont.resume(CombinedCaptureResult(
                            image, result, exifOrientation, imageReader.imageFormat))

                    // There is no need to break out of the loop, this coroutine will suspend
                }
            }
        }
    }, cameraHandler)
}

/** Helper function used to save a [CombinedCaptureResult] into a [File] */
private suspend fun saveResult(result: CombinedCaptureResult): File = suspendCoroutine { cont ->
    when (result.format) {

        // When the format is JPEG or DEPTH JPEG we can simply save the bytes as-is
        ImageFormat.JPEG, ImageFormat.DEPTH_JPEG -> {
            val buffer = result.image.planes[0].buffer
            val bytes = ByteArray(buffer.remaining()).apply { buffer.get(this) }
            try {
                val output = createFile(requireContext(), "jpg")
                FileOutputStream(output).use { it.write(bytes) }
                cont.resume(output)
            } catch (exc: IOException) {
                Log.e(TAG, "Unable to write JPEG image to file", exc)
                cont.resumeWithException(exc)
            }
        }

        // When the format is RAW we use the DngCreator utility library
        ImageFormat.RAW_SENSOR -> {
            val dngCreator = DngCreator(characteristics, result.metadata)
            try {
                val output = createFile(requireContext(), "dng")
                FileOutputStream(output).use { dngCreator.writeImage(it, result.image) }
                cont.resume(output)
            } catch (exc: IOException) {
                Log.e(TAG, "Unable to write DNG image to file", exc)
                cont.resumeWithException(exc)
            }
        }

        // No other formats are supported by this sample
        else -> {
            val exc = RuntimeException("Unknown image format: ${result.image.format}")
            Log.e(TAG, exc.message, exc)
            cont.resumeWithException(exc)
        }
    }
}

override fun onStop() {
    super.onStop()
    try {
        camera.close()
    } catch (exc: Throwable) {
        Log.e(TAG, "Error closing camera", exc)
    }
}

override fun onDestroy() {
    super.onDestroy()
    cameraThread.quitSafely()
    imageReaderThread.quitSafely()
}

override fun onDestroyView() {
    _fragmentCameraBinding = null
    super.onDestroyView()
}

companion object {
    private val TAG = CameraFragment::class.java.simpleName

    /** Maximum number of images that will be held in the reader's buffer */
    private const val IMAGE_BUFFER_SIZE: Int = 3

    /** Maximum time allowed to wait for the result of an image capture */
    private const val IMAGE_CAPTURE_TIMEOUT_MILLIS: Long = 5000

    /** Helper data class used to hold capture metadata with their associated image */
    data class CombinedCaptureResult(
            val image: Image,
            val metadata: CaptureResult,
            val orientation: Int,
            val format: Int
    ) : Closeable {
        override fun close() = image.close()
    }

    /**
     * Create a [File] named using a formatted timestamp with the current date and time.
     *
     * @return [File] created.
     */
    private fun createFile(context: Context, extension: String): File {
        val sdf = SimpleDateFormat("yyyy_MM_dd_HH_mm_ss_SSS", Locale.US)
        return File(context.filesDir, "IMG_${sdf.format(Date())}.$extension")
    }
}
}

Reprinted from blog.csdn.net/Scott_S/article/details/123009666