Building a Camera Android App with CameraX in Jetpack Compose

Jetpack Compose + CameraX
CameraX is a Jetpack library designed to help simplify camera application development.

[CameraX official documentation] https://developer.android.com/training/camerax

CameraX offers several use cases:

  • Image capture
  • Video capture
  • Preview
  • Image analysis

For details on how to use each of them, see the official link above.
This article walks through the video recording use case only.

Video Recording


  1. Add the CameraX dependencies
// CameraX
cameraxVersion = '1.2.0-beta01'
implementation "androidx.camera:camera-lifecycle:$cameraxVersion"
implementation "androidx.camera:camera-video:$cameraxVersion"
implementation "androidx.camera:camera-view:$cameraxVersion"
implementation "androidx.camera:camera-extensions:$cameraxVersion"

// Accompanist
accompanistPermissionsVersion = '0.23.1'
implementation "com.google.accompanist:accompanist-permissions:$accompanistPermissionsVersion"

Before recording, we need to request the camera and audio permissions:

val permissionState = rememberMultiplePermissionsState(
    permissions = listOf(
        Manifest.permission.CAMERA,
        Manifest.permission.RECORD_AUDIO
    )
)

LaunchedEffect(Unit) {
    permissionState.launchMultiplePermissionRequest()
}

PermissionsRequired(
    multiplePermissionsState = permissionState,
    permissionsNotGrantedContent = { /* ... */ },
    permissionsNotAvailableContent = { /* ... */ }
) {
    // Rest of the compose code will be here
}

Create the recording objects

val context = LocalContext.current
val lifecycleOwner = LocalLifecycleOwner.current

var recording: Recording? = remember { null }
val previewView: PreviewView = remember { PreviewView(context) }
val videoCapture: MutableState<VideoCapture<Recorder>?> = remember { mutableStateOf(null) }
val recordingStarted: MutableState<Boolean> = remember { mutableStateOf(false) }

val audioEnabled: MutableState<Boolean> = remember { mutableStateOf(false) }
val cameraSelector: MutableState<CameraSelector> = remember {
    mutableStateOf(CameraSelector.DEFAULT_BACK_CAMERA)
}

LaunchedEffect(previewView) {
    videoCapture.value = context.createVideoCaptureUseCase(
        lifecycleOwner = lifecycleOwner,
        cameraSelector = cameraSelector.value,
        previewView = previewView
    )
}

Recording is an object that lets us control the currently active recording: we can stop, pause, and resume it. We create this object when we start recording.

PreviewView is a custom view that displays the camera feed. We bind it to the lifecycle and add it to an AndroidView, where it shows what we are currently recording.

VideoCapture is a generic class that provides a camera stream suitable for video applications. Here we pass it the Recorder class, an implementation of the VideoOutput interface, which lets us start recording.

recordingStarted and audioEnabled are helper variables used on this screen; their meaning should be obvious.

CameraSelector is a set of requirements and priorities used to select a camera or return a filtered set of cameras. Here we will only use the default front and back cameras.

In the LaunchedEffect, we call a function that creates a video capture use case. It looks like this:

suspend fun Context.createVideoCaptureUseCase(
    lifecycleOwner: LifecycleOwner,
    cameraSelector: CameraSelector,
    previewView: PreviewView
): VideoCapture<Recorder> {
    val preview = Preview.Builder()
        .build()
        .apply { setSurfaceProvider(previewView.surfaceProvider) }

    val qualitySelector = QualitySelector.from(
        Quality.FHD,
        FallbackStrategy.lowerQualityOrHigherThan(Quality.FHD)
    )
    val recorder = Recorder.Builder()
        .setExecutor(mainExecutor)
        .setQualitySelector(qualitySelector)
        .build()
    val videoCapture = VideoCapture.withOutput(recorder)

    val cameraProvider = getCameraProvider()
    cameraProvider.unbindAll()
    cameraProvider.bindToLifecycle(
        lifecycleOwner,
        cameraSelector,
        preview,
        videoCapture
    )

    return videoCapture
}

First, we create a Preview, a use case that provides a camera preview stream for display on screen. We could configure several parameters here, such as the aspect ratio, capture processor, or image info processor, but since we don't need any of them, we create a plain Preview object.

Next, we choose the video quality, using a QualitySelector to define the desired settings. We want Full HD, so we pass Quality.FHD. Some phones may not support the requested quality, so you should always have a fallback, as we do here by passing a FallbackStrategy. Several strategies are available:

  • higherQualityOrLowerThan — picks the closest quality higher than the input; if no such quality is supported, picks the closest lower one.
  • higherQualityThan — picks the closest quality higher than the input.
  • lowerQualityOrHigherThan — picks the closest quality lower than the input; if no such quality is supported, picks the closest higher one.
  • lowerQualityThan — picks the closest quality lower than the input.

Alternatively, you can simply pass Quality.LOWEST or Quality.HIGHEST, which may be easier, but I also wanted to show this approach.
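
To make the fallback semantics concrete, here is a small pure-Kotlin sketch (not the CameraX implementation) of how a selector built from Quality.FHD with a lowerQualityOrHigherThan fallback resolves against the qualities a device actually supports. Qualities are modeled as plain strings ordered from lowest to highest:

```kotlin
// Stand-ins for Quality.SD / HD / FHD / UHD, ordered lowest to highest.
val qualityOrder = listOf("SD", "HD", "FHD", "UHD")

// Resolve a target quality against the supported set: prefer the closest
// supported quality at or below the target (the target itself first),
// otherwise the closest supported quality above it.
fun resolveQuality(target: String, supported: Set<String>): String? {
    val targetIdx = qualityOrder.indexOf(target)
    val atOrBelow = (targetIdx downTo 0)
        .map { qualityOrder[it] }
        .firstOrNull { it in supported }
    val above = (targetIdx + 1 until qualityOrder.size)
        .map { qualityOrder[it] }
        .firstOrNull { it in supported }
    return atOrBelow ?: above
}
```

For example, asking for FHD on a device that only supports SD and HD resolves to HD, while a device that only supports UHD resolves to UHD.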

Now we create a Recorder and use it to obtain a VideoCapture object by calling VideoCapture.withOutput(recorder).

The camera provider is an instance of the ProcessCameraProvider singleton, which lets us bind the camera's lifecycle to any LifecycleOwner within the application process. The function we use to get the camera provider is:

suspend fun Context.getCameraProvider(): ProcessCameraProvider = suspendCoroutine { continuation ->
    ProcessCameraProvider.getInstance(this).also { future ->
        future.addListener(
            { continuation.resume(future.get()) },
            mainExecutor
        )
    }
}

ProcessCameraProvider.getInstance(this) returns a Future, and we need to wait for it to complete to get the instance.
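
The same listener-to-coroutine bridge can be seen in isolation. The sketch below is an illustration only: it uses java.util.concurrent.CompletableFuture in place of CameraX's ListenableFuture, and a minimal hand-rolled runner instead of an Android coroutine scope, but the suspendCoroutine pattern is the same one getCameraProvider uses:

```kotlin
import java.util.concurrent.CompletableFuture
import kotlin.coroutines.Continuation
import kotlin.coroutines.EmptyCoroutineContext
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlin.coroutines.startCoroutine
import kotlin.coroutines.suspendCoroutine

// Suspend until the future completes, resuming the coroutine from the
// future's completion callback, just like getCameraProvider's addListener.
suspend fun <T> CompletableFuture<T>.await(): T = suspendCoroutine { continuation ->
    whenComplete { value, error ->
        if (error != null) continuation.resumeWithException(error)
        else continuation.resume(value)
    }
}

var provider: String? = null
// Minimal runner so the sketch works outside an Android coroutine scope.
suspend { CompletableFuture.completedFuture("camera-provider").await() }
    .startCoroutine(object : Continuation<String> {
        override val context = EmptyCoroutineContext
        override fun resumeWith(result: Result<String>) {
            provider = result.getOrThrow()
        }
    })
```

Because the future here is already completed, the continuation resumes synchronously and `provider` holds the value as soon as startCoroutine returns.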

Next, we bind everything to the lifecycle, passing the lifecycleOwner, cameraSelector, preview, and videoCapture.

Now it's time to finish the rest of the Compose code. I hope you're still with me!

Inside the PermissionsRequired content block, we add the AndroidView and a button for recording. The code looks like this:

AndroidView(
    factory = { previewView },
    modifier = Modifier.fillMaxSize()
)
IconButton(
    onClick = {
        if (!recordingStarted.value) {
            videoCapture.value?.let { videoCapture ->
                recordingStarted.value = true
                val mediaDir = context.externalCacheDirs.firstOrNull()?.let {
                    File(it, context.getString(R.string.app_name)).apply { mkdirs() }
                }

                recording = startRecordingVideo(
                    context = context,
                    filenameFormat = "yyyy-MM-dd-HH-mm-ss-SSS",
                    videoCapture = videoCapture,
                    outputDirectory = if (mediaDir != null && mediaDir.exists()) mediaDir else context.filesDir,
                    executor = context.mainExecutor,
                    audioEnabled = audioEnabled.value
                ) { event ->
                    // Process events that we get while recording
                }
            }
        } else {
            recordingStarted.value = false
            recording?.stop()
        }
    },
    modifier = Modifier
        .align(Alignment.BottomCenter)
        .padding(bottom = 32.dp)
) {
    Icon(
        painter = painterResource(if (recordingStarted.value) R.drawable.ic_stop else R.drawable.ic_record),
        contentDescription = "",
        modifier = Modifier.size(64.dp)
    )
}

The AndroidView displays our preview.

The button starts and stops recording. When we want to start recording, we first get the media directory, creating it if it doesn't exist, and then call the startRecordingVideo function:

fun startRecordingVideo(
    context: Context,
    filenameFormat: String,
    videoCapture: VideoCapture<Recorder>,
    outputDirectory: File,
    executor: Executor,
    audioEnabled: Boolean,
    consumer: Consumer<VideoRecordEvent>
): Recording {
    val videoFile = File(
        outputDirectory,
        SimpleDateFormat(filenameFormat, Locale.US).format(System.currentTimeMillis()) + ".mp4"
    )

    val outputOptions = FileOutputOptions.Builder(videoFile).build()

    return videoCapture.output
        .prepareRecording(context, outputOptions)
        .apply { if (audioEnabled) withAudioEnabled() }
        .start(executor, consumer)
}

This is a simple function: it creates a file, prepares the recording, and starts it. If audio is enabled, we enable audio recording as well. The object the function returns is used to stop the recording. The consumer parameter is a callback invoked for every event; you can use it to get the file's URI once the video recording is finished.
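
The filename and output-directory logic can be exercised on their own. Below is a sketch with plain temp-directory Files standing in for context.externalCacheDirs and context.filesDir, and "CameraApp" standing in for R.string.app_name (all three are hypothetical stand-ins), using the same timestamp pattern as startRecordingVideo:

```kotlin
import java.io.File
import java.text.SimpleDateFormat
import java.util.Locale

// Stand-ins for context.externalCacheDirs and context.filesDir.
val externalCacheDirs: List<File> =
    listOf(File(System.getProperty("java.io.tmpdir"), "external-cache"))
val filesDir: File =
    File(System.getProperty("java.io.tmpdir"), "internal-files").apply { mkdirs() }

// Same directory choice as the record button: an app-named folder in the
// external cache when available, internal storage otherwise.
val mediaDir: File? = externalCacheDirs.firstOrNull()?.let {
    File(it, "CameraApp").apply { mkdirs() }
}
val outputDirectory = if (mediaDir != null && mediaDir.exists()) mediaDir else filesDir

// Same filename expression as startRecordingVideo, e.g.
// "2023-05-22-14-03-59-123.mp4".
val videoFile = File(
    outputDirectory,
    SimpleDateFormat("yyyy-MM-dd-HH-mm-ss-SSS", Locale.US)
        .format(System.currentTimeMillis()) + ".mp4"
)
```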

Let's add the logic for the audio and camera selector buttons.

if (!recordingStarted.value) {
    IconButton(
        onClick = {
            audioEnabled.value = !audioEnabled.value
        },
        modifier = Modifier
            .align(Alignment.BottomStart)
            .padding(bottom = 32.dp)
    ) {
        Icon(
            painter = painterResource(if (audioEnabled.value) R.drawable.ic_mic_on else R.drawable.ic_mic_off),
            contentDescription = "",
            modifier = Modifier.size(64.dp)
        )
    }
}
if (!recordingStarted.value) {
    IconButton(
        onClick = {
            cameraSelector.value =
                if (cameraSelector.value == CameraSelector.DEFAULT_BACK_CAMERA) CameraSelector.DEFAULT_FRONT_CAMERA
                else CameraSelector.DEFAULT_BACK_CAMERA
            lifecycleOwner.lifecycleScope.launch {
                videoCapture.value = context.createVideoCaptureUseCase(
                    lifecycleOwner = lifecycleOwner,
                    cameraSelector = cameraSelector.value,
                    previewView = previewView
                )
            }
        },
        modifier = Modifier
            .align(Alignment.BottomEnd)
            .padding(bottom = 32.dp)
    ) {
        Icon(
            painter = painterResource(R.drawable.ic_switch_camera),
            contentDescription = "",
            modifier = Modifier.size(64.dp)
        )
    }
}

These two buttons enable or disable audio and switch between the front and back cameras. When we switch cameras, we need to create a new VideoCapture object so the preview shows the new camera's feed.

That's everything for this screen, but now we want to be able to watch what we recorded, right? Of course, so we'll create another screen and use ExoPlayer to display the video.

First, let's add some logic to the consumer callback:

if (event is VideoRecordEvent.Finalize) {
    val uri = event.outputResults.outputUri
    if (uri != Uri.EMPTY) {
        val uriEncoded = URLEncoder.encode(
            uri.toString(),
            StandardCharsets.UTF_8.toString()
        )
        navController.navigate("${Route.VIDEO_PREVIEW}/$uriEncoded")
    }
}

If the event is VideoRecordEvent.Finalize, the recording has finished and we can get the video's URI. Several recording events are available; you can handle any of them, but here we only need Finalize:

  • Start
  • Finalize
  • Status
  • Pause
  • Resume

If the video is too short, for example less than half a second, the URI may be empty, which is why we need that if statement.
To pass the URI as a navigation argument, it has to be encoded.
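
The encode/decode round-trip can be checked with plain JDK classes. A minimal sketch, using a made-up content URI string, showing that encoding removes the slashes that would otherwise be parsed as route separators, and that the destination screen can decode the original back:

```kotlin
import java.net.URLDecoder
import java.net.URLEncoder
import java.nio.charset.StandardCharsets

// A made-up example of the kind of URI MediaStore returns.
val uri = "content://media/external/video/media/42"

// Same encoding call as in the consumer callback; "/" becomes "%2F",
// so the whole URI fits in a single navigation path segment.
val uriEncoded = URLEncoder.encode(uri, StandardCharsets.UTF_8.toString())

// The navigation destination decodes the argument back into a usable URI.
val uriDecoded = URLDecoder.decode(uriEncoded, StandardCharsets.UTF_8.toString())
```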

The final code for this screen looks like this:

@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun VideoCaptureScreen(
    navController: NavController
) {
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current

    val permissionState = rememberMultiplePermissionsState(
        permissions = listOf(
            Manifest.permission.CAMERA,
            Manifest.permission.RECORD_AUDIO
        )
    )

    var recording: Recording? = remember { null }
    val previewView: PreviewView = remember { PreviewView(context) }
    val videoCapture: MutableState<VideoCapture<Recorder>?> = remember { mutableStateOf(null) }
    val recordingStarted: MutableState<Boolean> = remember { mutableStateOf(false) }

    val audioEnabled: MutableState<Boolean> = remember { mutableStateOf(false) }
    val cameraSelector: MutableState<CameraSelector> = remember {
        mutableStateOf(CameraSelector.DEFAULT_BACK_CAMERA)
    }

    LaunchedEffect(Unit) {
        permissionState.launchMultiplePermissionRequest()
    }

    LaunchedEffect(previewView) {
        videoCapture.value = context.createVideoCaptureUseCase(
            lifecycleOwner = lifecycleOwner,
            cameraSelector = cameraSelector.value,
            previewView = previewView
        )
    }
    PermissionsRequired(
        multiplePermissionsState = permissionState,
        permissionsNotGrantedContent = { /* ... */ },
        permissionsNotAvailableContent = { /* ... */ }
    ) {
        Box(
            modifier = Modifier.fillMaxSize()
        ) {
            AndroidView(
                factory = { previewView },
                modifier = Modifier.fillMaxSize()
            )
            IconButton(
                onClick = {
                    if (!recordingStarted.value) {
                        videoCapture.value?.let { videoCapture ->
                            recordingStarted.value = true
                            val mediaDir = context.externalCacheDirs.firstOrNull()?.let {
                                File(it, context.getString(R.string.app_name)).apply { mkdirs() }
                            }

                            recording = startRecordingVideo(
                                context = context,
                                filenameFormat = "yyyy-MM-dd-HH-mm-ss-SSS",
                                videoCapture = videoCapture,
                                outputDirectory = if (mediaDir != null && mediaDir.exists()) mediaDir else context.filesDir,
                                executor = context.mainExecutor,
                                audioEnabled = audioEnabled.value
                            ) { event ->
                                if (event is VideoRecordEvent.Finalize) {
                                    val uri = event.outputResults.outputUri
                                    if (uri != Uri.EMPTY) {
                                        val uriEncoded = URLEncoder.encode(
                                            uri.toString(),
                                            StandardCharsets.UTF_8.toString()
                                        )
                                        navController.navigate("${Route.VIDEO_PREVIEW}/$uriEncoded")
                                    }
                                }
                            }
                        }
                    } else {
                        recordingStarted.value = false
                        recording?.stop()
                    }
                },
                modifier = Modifier
                    .align(Alignment.BottomCenter)
                    .padding(bottom = 32.dp)
            ) {
                Icon(
                    painter = painterResource(if (recordingStarted.value) R.drawable.ic_stop else R.drawable.ic_record),
                    contentDescription = "",
                    modifier = Modifier.size(64.dp)
                )
            }
            if (!recordingStarted.value) {
                IconButton(
                    onClick = {
                        audioEnabled.value = !audioEnabled.value
                    },
                    modifier = Modifier
                        .align(Alignment.BottomStart)
                        .padding(bottom = 32.dp)
                ) {
                    Icon(
                        painter = painterResource(if (audioEnabled.value) R.drawable.ic_mic_on else R.drawable.ic_mic_off),
                        contentDescription = "",
                        modifier = Modifier.size(64.dp)
                    )
                }
            }
            if (!recordingStarted.value) {
                IconButton(
                    onClick = {
                        cameraSelector.value =
                            if (cameraSelector.value == CameraSelector.DEFAULT_BACK_CAMERA) CameraSelector.DEFAULT_FRONT_CAMERA
                            else CameraSelector.DEFAULT_BACK_CAMERA
                        lifecycleOwner.lifecycleScope.launch {
                            videoCapture.value = context.createVideoCaptureUseCase(
                                lifecycleOwner = lifecycleOwner,
                                cameraSelector = cameraSelector.value,
                                previewView = previewView
                            )
                        }
                    },
                    modifier = Modifier
                        .align(Alignment.BottomEnd)
                        .padding(bottom = 32.dp)
                ) {
                    Icon(
                        painter = painterResource(R.drawable.ic_switch_camera),
                        contentDescription = "",
                        modifier = Modifier.size(64.dp)
                    )
                }
            }
        }
    }
}

ExoPlayer


ExoPlayer is an alternative to Android's MediaPlayer API that can play local and internet audio and video. It is easier to use, offers more features, and is also easy to customize and extend.

Now that we know what ExoPlayer is, let's create the next screen. Add the dependency:

//ExoPlayer Library
exoPlayerVersion = '2.18.1'
implementation "com.google.android.exoplayer:exoplayer:$exoPlayerVersion"

Our code looks like this:

@Composable
fun VideoPreviewScreen(
    uri: String
) {
    val context = LocalContext.current

    val exoPlayer = remember(context) {
        ExoPlayer.Builder(context).build().apply {
            setMediaItem(MediaItem.fromUri(uri))
            prepare()
        }
    }

    DisposableEffect(Unit) {
        onDispose {
            exoPlayer.release()
        }
    }

    Box(
        modifier = Modifier.fillMaxSize()
    ) {
        AndroidView(
            factory = { context ->
                StyledPlayerView(context).apply {
                    player = exoPlayer
                }
            },
            modifier = Modifier.fillMaxSize()
        )
    }
}

We use a builder to create the ExoPlayer, set the URI of the video to load, and prepare the player. The DisposableEffect releases the player when the screen leaves composition.

We use an AndroidView to display the video and attach a StyledPlayerView to it.

StyledPlayerView is a high-level view for media playback. It displays video, subtitles, and album art during playback, and shows playback controls using a StyledPlayerControlView.
StyledPlayerView can be customized by setting attributes (or calling the corresponding methods) or by overriding its drawing.

Source code

https://github.com/Giga99/CameraApp

Reposted from blog.csdn.net/u011897062/article/details/130824467