Building a Video Recorder on Android

API Overview

This article shows how to build a video recorder with AudioRecord, Camera2, Surface, MediaCodec, and MediaMuxer.

  • AudioRecord records audio; here it captures sound from the microphone and outputs raw PCM data. Note that AudioRecord itself only produces uncompressed PCM; compressed formats such as AAC or MP3 require a separate encoder.
  • Camera2 captures video; its frames are delivered through a Surface.
  • MediaCodec encodes the audio and video; here audio is encoded to an AAC stream and video to an H.264 stream.
  • MediaMuxer merges the encoded AAC and H.264 streams into an MP4 file.

Workflow Overview

  1. When the user opens the page, we start the camera preview; the captured frames are shown to the user via a SurfaceView.
  2. When the user taps "start recording", we send a recording CaptureRequest to the camera service through the Camera2 API; the recorded frames are delivered to the Surface we pass to the camera. The recording Surface is separate from the preview Surface.
  3. At the same time we call AudioRecord.startRecording() to start audio capture, and start the audio and video encoding threads, which encode with MediaCodec.
  4. We also create the MediaMuxer when recording starts.
  5. When an encoding thread receives encoded data, it feeds the data to the MediaMuxer.
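The flow above can be condensed into one rough lifecycle method. This is a sketch only; every helper name here is a hypothetical placeholder, not the project's actual API:

```kotlin
// Sketch only; each line corresponds to one of the numbered steps above
fun onStartRecordingClicked() {
    // step 2: ask the camera to also stream into the encoder's input surface
    sendRecordRequest(videoCodecInputSurface)
    // step 4: create the muxer before any encoded data arrives
    muxerMp4 = MediaMuxerMp4(outputPath, sensorOrientation)
    // step 3: start audio capture and both encode loops
    mAudioRecord.startRecording()
    startAudioEncodeThread()   // feeds PCM into the AAC encoder, then into the muxer
    startVideoEncodeThread()   // drains H.264 buffers from the codec, then into the muxer
}
```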

Core Code Snippets

Preview

  • We use the Camera2 API. context.getSystemService(AppCompatActivity.CAMERA_SERVICE) as CameraManager gives us the camera service.
  • From the CameraManager we can get the list of camera ids on the device, and for each camera its properties via CameraCharacteristics: whether it is front- or back-facing, its supported output formats, fps ranges, output sizes, and so on.
  • We first enumerate all cameras and collect their properties, so we can pick a suitable one to open.
data class CameraInfo(
    val cameraId: String? = null,
    // front- or back-facing
    val lenFacing: Int = -1,
    // rotation of the output frames
    val orientation: Int? = null
)

// Enumerate all cameras and collect the characteristics we care about into CameraInfo
fun enumerateCameras(cameraManager: CameraManager): ArrayList<CameraInfo> {
    val cameraInfoList = arrayListOf<CameraInfo>()
    try {
        for (cameraId in cameraManager.cameraIdList) {
            val cameraCharacteristics = cameraManager.getCameraCharacteristics(cameraId)
            val lensFacing = cameraCharacteristics.get(CameraCharacteristics.LENS_FACING)
            val orientation = cameraCharacteristics.get(CameraCharacteristics.SENSOR_ORIENTATION)
            cameraInfoList.add(CameraInfo(cameraId, lensFacing ?: -1, orientation))
        }
    } catch (e: CameraAccessException) {
        e.printStackTrace()
    }
    return cameraInfoList
}

// Pick a suitable camera for your use case; here we prefer the front camera
private fun findBestCameraInfo(cameraInfoList: ArrayList<CameraInfo>): CameraInfo {
    return cameraInfoList.firstOrNull { it.lenFacing == CameraCharacteristics.LENS_FACING_FRONT }
        ?: cameraInfoList.first()
}
  • After choosing a camera we can start the preview. We pick a preview size based on our view's size combined with the sizes the camera supports, compute the aspect ratio of the chosen size, and adjust the SurfaceView to match it; without this adjustment the image will look stretched.
    The code for choosing a suitable preview size:
/**
 * @param display the window's display; its size is combined with the camera's
 * supported sizes to pick a suitable preview Size
 */
fun getLargestPreviewSize(display: Display): SmartSize {
    val cameraCharacteristics = mCameraManager.getCameraCharacteristics(
        mCurCameraInfo?.cameraId ?: return SmartSize.SIZE_NONE
    )
    val displayPoint = Point()
    display.getRealSize(displayPoint)
    val screenSize = SmartSize(displayPoint.x, displayPoint.y)
    val hdScreen = screenSize.width >= SmartSize.SIZE_1080P.width
            || screenSize.height >= SmartSize.SIZE_1080P.height
    val maxSize = if (hdScreen) SmartSize.SIZE_1080P else screenSize
    val map = cameraCharacteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
    // When querying the Camera2 API you pass in the class the data will eventually be
    // attached to: supported formats/sizes can differ per consumer class.
    // SurfaceHolder here stands for the preview SurfaceView.
    val surfaceOutputSizes =
        map?.getOutputSizes(SurfaceHolder::class.java) ?: return SmartSize.SIZE_NONE
    // Since we hand the output surface directly to MediaCodec, the chosen size
    // must be supported by MediaCodec as well.
    val mediaCodecOutputSizes =
        map.getOutputSizes(MediaCodec::class.java) ?: return SmartSize.SIZE_NONE
    // Keep only the sizes supported by both consumers, largest first
    val outputSizes = surfaceOutputSizes.intersect(mediaCodecOutputSizes.asList())
        .sortedByDescending { it.width * it.height }
    val targetSize =
        outputSizes.firstOrNull { it.width <= maxSize.width && it.height <= maxSize.height }
            ?: return SmartSize.SIZE_NONE
    return SmartSize(targetSize.width, targetSize.height)
}

The SurfaceView aspect-ratio fitting code:

class AutoFitSurfaceView @JvmOverloads
constructor(context: Context? = null, attributeSet: AttributeSet? = null, defStyle: Int = 0) :
    SurfaceView(context, attributeSet, defStyle) {

    private var aspectRatio: Float = 0F

    fun setAspectRatio(width: Int, height: Int) {
        this.aspectRatio = width.toFloat() / height.toFloat()
        holder.setFixedSize(width, height)
        requestLayout()
    }

    override fun onMeasure(widthMeasureSpec: Int, heightMeasureSpec: Int) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec)
        val width = MeasureSpec.getSize(widthMeasureSpec)
        val height = MeasureSpec.getSize(heightMeasureSpec)
        if (aspectRatio == 0f) {
            setMeasuredDimension(width, height)
        } else {
            val actualRatio = if (width > height) aspectRatio else 1f / aspectRatio
            val newWidth: Int
            val newHeight: Int
            if (width < height * actualRatio) {
                newHeight = height
                newWidth = (height * actualRatio).roundToInt()
            } else {
                newWidth = width
                newHeight = (width / actualRatio).roundToInt()
            }
            setMeasuredDimension(newWidth, newHeight)
        }
    }
}
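Tying the size selection and the view together, a call site could look roughly like this (a sketch: `surfaceView` is assumed to be an `AutoFitSurfaceView` from the layout, with the helper above in scope):

```kotlin
// Pick the preview size, then let the SurfaceView adopt its aspect ratio
val previewSize = getLargestPreviewSize(windowManager.defaultDisplay)
surfaceView.setAspectRatio(previewSize.width, previewSize.height)
```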
  • Open the camera using the chosen cameraId
 mCameraManager.openCamera(
     mCurCameraInfo?.cameraId ?: return, object : CameraDevice.StateCallback() {
         override fun onOpened(camera: CameraDevice) {
             // Opened successfully; we now have a CameraDevice
             cameraOpened(camera)
         }

         override fun onDisconnected(camera: CameraDevice) {
         }

         override fun onError(camera: CameraDevice, error: Int) {
         }
     },
     cameraHandler
 )
  • Create a session, passing in both the preview Surface and the recording Surface
    • The preview Surface is the one backing the SurfaceView
    • The recording Surface is created via MediaCodec and ultimately serves as the video encoder's input source. A Surface created this way must be set on the MediaCodec, and that codec must be configured, before recording starts; otherwise recording will throw an error.
    private val videoCodecInputSurface: Surface by lazy {
        val surface = MediaCodec.createPersistentInputSurface()
        MediaFoundationFactory.createVideoMediaCodec(videoMediaFormat, surface)
        surface
    }

    // MediaFoundationFactory.kt
    fun createVideoMediaCodec(format: MediaFormat, inputSurface: Surface): MediaCodec {
        val mediaCodecList = MediaCodecList(MediaCodecList.ALL_CODECS)
        val codecName = mediaCodecList.findEncoderForFormat(format)
        val videoCodec = MediaCodec.createByCodecName(codecName)
        videoCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        videoCodec.setInputSurface(inputSurface)
        return videoCodec
    }
private fun cameraOpened(camera: CameraDevice) {
    try {
        camera.createCaptureSession(
            arrayListOf(previewSurface, videoCodecInputSurface),
            object : CameraCaptureSession.StateCallback() {
                override fun onConfigured(session: CameraCaptureSession) {
                    // Session configured successfully
                    mCameraCaptureSession = session
                    startPreview(session)
                }

                override fun onConfigureFailed(session: CameraCaptureSession) {
                }
            },
            cameraHandler
        )
    } catch (e: CameraAccessException) {
        e.printStackTrace()
    } catch (e: Exception) {
        e.printStackTrace()
    }
}
  • Submit a repeating preview request to the session; once it is submitted, the SurfaceView shows the frames captured by the camera
    private val captureRequest: CaptureRequest? by lazy {
        val builder =
            mCameraCaptureSession.device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
        builder.addTarget(previewSurface)
        builder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, Range(FPS, FPS))
        builder.build()
    }

    private fun startPreview(session: CameraCaptureSession) {
        try {
            session.setRepeatingRequest(
                captureRequest ?: return,
                null,
                cameraHandler
            )
        } catch (e: CameraAccessException) {
            e.printStackTrace()
        }
    }

Recording

  • Create the MediaMuxer; once the encoders produce data, this object writes it out
  muxer = MediaMuxer(output, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
  // Set the rotation hint from the camera's sensor orientation
  muxer.setOrientationHint(orientation)
  • Create the video encoder
    private const val FRAME_RATE = 15
    private const val IFRAME_INTERVAL = 10
    private const val VIDEO_BIT_RATE = 2000000
    private const val AUDIO_BIT_RATE = 128000

    fun createVideoMediaFormat(width: Int, height: Int): MediaFormat {
        val mediaFormat = MediaFormat.createVideoFormat(
            MediaFormat.MIMETYPE_VIDEO_AVC, // H.264
            width,
            height
        )
        mediaFormat.setInteger(
            MediaFormat.KEY_COLOR_FORMAT,
            // We feed input directly from a Surface, hence this color format
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface
        )
        // Target bitrate of the encoded stream
        mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, VIDEO_BIT_RATE)
        // Frame rate
        mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE)
        // Key-frame interval, in seconds between I-frames
        mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL)
        return mediaFormat
    }

    fun createVideoMediaCodec(format: MediaFormat, inputSurface: Surface): MediaCodec {
        val mediaCodecList = MediaCodecList(MediaCodecList.ALL_CODECS)
        val codecName = mediaCodecList.findEncoderForFormat(format)
        val videoCodec = MediaCodec.createByCodecName(codecName)
        videoCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        videoCodec.setInputSurface(inputSurface)
        return videoCodec
    }
  • Create the audio encoder
    private const val SAMPLE_RATE = 44100
    private const val CHANNEL_COUNT = 2

    fun createAudioMediaFormat(): MediaFormat {
        val mediaFormat = MediaFormat.createAudioFormat(
            MediaFormat.MIMETYPE_AUDIO_AAC,
            SAMPLE_RATE,
            CHANNEL_COUNT
        )
        // Target bitrate
        mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, AUDIO_BIT_RATE)
        mediaFormat.setInteger(
            MediaFormat.KEY_AAC_PROFILE,
            MediaCodecInfo.CodecProfileLevel.AACObjectELD
        )
        return mediaFormat
    }

    fun createAudioMediaCodec(format: MediaFormat): MediaCodec {
        val mediaCodecList = MediaCodecList(MediaCodecList.ALL_CODECS)
        val codecName = mediaCodecList.findEncoderForFormat(format)
        val audioCodec = MediaCodec.createByCodecName(codecName)
        audioCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        return audioCodec
    }
  • Create the AudioRecord for capturing audio. Note the minimum buffer size is computed from the same format that is passed to the builder, so the two cannot drift apart:
    fun createAudioRecord(audioFormat: AudioFormat): AudioRecord {
        val minBufferSize = AudioRecord.getMinBufferSize(
            audioFormat.sampleRate,
            audioFormat.channelMask,
            audioFormat.encoding
        )
        return AudioRecord.Builder()
            .setAudioFormat(audioFormat)
            .setBufferSizeInBytes(minBufferSize * 2)
            .setAudioSource(MediaRecorder.AudioSource.DEFAULT)
            .build()
    }
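For reference, the `audioFormat` argument above can be built to match the encoder settings (44.1 kHz, stereo, 16-bit PCM); this is a sketch, not the project's exact code:

```kotlin
val audioFormat = AudioFormat.Builder()
    .setSampleRate(44100)                          // must match the AAC encoder's SAMPLE_RATE
    .setChannelMask(AudioFormat.CHANNEL_IN_STEREO) // two channels, matching CHANNEL_COUNT
    .setEncoding(AudioFormat.ENCODING_PCM_16BIT)   // raw 16-bit PCM samples
    .build()
```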
  • Start and stop the audio capture
	mAudioRecord.startRecording()
	mAudioRecord.stop()
  • Video encoding
    • Because the encoder was configured with a Surface input, the encoding loop only needs to pull output data.
      The code that sets the input surface:
    videoCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    videoCodec.setInputSurface(inputSurface)

    • Start the codec before encoding:
    mMediaCodec.start()

    • When writing encoded data to the MP4 via the muxer, we must supply a pts for audio/video synchronization; otherwise audio and video drift apart during playback.
        while (state == STATE_START) {
            val bufferInfo = MediaCodec.BufferInfo()
            val outputIndex = videoCodec.dequeueOutputBuffer(bufferInfo, TIME_OUT)
            if (outputIndex >= 0) {
                val outputBuffer = videoCodec.getOutputBuffer(outputIndex) ?: continue
                if (timeSync.audioUpdated()) {
                    // A/V pts sync: in Surface mode we cannot set the video pts ourselves,
                    // so when the first audio and first video buffers arrive we compute the
                    // diff between their pts, then add that diff to every later video pts.
                    bufferInfo.presentationTimeUs =
                        timeSync.getVideoPts(bufferInfo.presentationTimeUs)
                    muxerMp4.writeVideoSampleData(outputBuffer, bufferInfo)
                }
                videoCodec.releaseOutputBuffer(outputIndex, false)
                Log.i(TAG, "consume output buffer index $outputIndex")
            } else if (outputIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                muxerMp4.addVideoTrack(videoCodec.getOutputFormat())
            }
        }

  • Audio encoding
    • The audio-encoding logic is similar to the video side
    • Again, call mMediaCodec.start() before encoding
        while (state == STATE_START) {
            val inputBufferIndex = audioCodec.dequeueInputBuffer(TIME_OUT)
            if (inputBufferIndex >= 0) {
                val inputBuffer = audioCodec.getInputBuffer(inputBufferIndex) ?: return
                // Read captured PCM data from the AudioRecord
                val size = audioRecord.read(inputBuffer, inputBuffer.limit())
                var end = false
                if (size <= 0) {
                    end = audioRecord.recordingState == AudioRecord.RECORDSTATE_STOPPED
                }
                // Queue raw audio data to the encoder along with its pts
                // (clamp size: read() returns a negative error code on failure)
                audioCodec.queueInputBuffer(
                    inputBufferIndex, 0, maxOf(size, 0), timeSync.getAudioPts(),
                    if (end) MediaCodec.BUFFER_FLAG_END_OF_STREAM else 0
                )
            }
            val bufferInfo = MediaCodec.BufferInfo()
            val outputBufferIndex = audioCodec.dequeueOutputBuffer(bufferInfo, TIME_OUT)
            if (outputBufferIndex >= 0) {
                val outputBuffer = audioCodec.getOutputBuffer(outputBufferIndex) ?: return
                // Write the encoded data into the MP4 via MediaMuxer
                muxerMp4.writeAudioSampleData(outputBuffer, bufferInfo)
                audioCodec.releaseOutputBuffer(outputBufferIndex, false)
            } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                Log.i(TAG, "run: outputBufferIndex:$outputBufferIndex")
                muxerMp4.addAudioTrack(audioCodec.getOutputFormat())
            }
        }
  • Time-sync logic
    • My sync approach is fairly blunt: encoded video data is not written until the first encoded audio buffer has arrived. When the first video buffer arrives, I compute the diff between its pts and the first audio pts; from then on, every video pts is shifted by that diff.
class TimeSync {

    private var firstAudioPts: Long? = null
    private var diff: Long? = null

    fun getAudioPts(): Long {
        val time = currentMicrosecond()
        if (firstAudioPts == null) {
            firstAudioPts = time
        }
        return time
    }

    fun audioUpdated(): Boolean = firstAudioPts != null

    fun getVideoPts(pts: Long): Long {
        if (diff == null) {
            // first audio pts minus first video pts; adding this diff maps
            // the first video sample onto the first audio sample's timestamp
            diff = firstAudioPts!! - pts
        }
        return pts + diff!!
    }

    private fun currentMicrosecond() = System.nanoTime() / 1000
}
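To make the diff arithmetic concrete, here are some hypothetical numbers. The intent is that the first written video sample lands on the first audio sample's position on the timeline, and every later video pts shifts by the same amount:

```kotlin
// Hypothetical pts values, in microseconds
val firstAudioPts = 1_000_000L               // pts of the first audio buffer
val firstVideoPts = 1_003_000L               // first video buffer arrives 3 ms later
val diff = firstAudioPts - firstVideoPts     // -3_000 µs
// shifting every video pts by diff aligns the first video sample
// with the first audio sample; later samples keep their relative spacing
val alignedFirstVideo = firstVideoPts + diff // == firstAudioPts
```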
  • Notes on using MediaMuxer
    • Wait until you have the audio/video outputFormat before adding a track with muxer.addTrack(format)
    • Call muxer.start() only after both the audio and the video tracks have been added
    • Write encoded audio and video samples only after muxer.start()
class MediaMuxerMp4(output: String, orientation: Int) {

    private val muxer: MediaMuxer
    private var audioTrackIndex: Int? = null
    private var videoTrackIndex: Int? = null
    private var audioReady = false
    private var videoReady = false
    private var isStarted = false

    companion object {
        private const val TAG = "MediaMuxerMp4"
    }

    init {
        muxer = MediaMuxer(output, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
        muxer.setOrientationHint(orientation)
    }

    fun addAudioTrack(audioMediaFormat: MediaFormat) {
        if (audioTrackIndex == null) {
            audioTrackIndex = muxer.addTrack(audioMediaFormat)
            audioReady = true
            tryToStart()
        }
    }

    fun addVideoTrack(videoMediaFormat: MediaFormat) {
        if (videoTrackIndex == null) {
            videoTrackIndex = muxer.addTrack(videoMediaFormat)
            videoReady = true
            tryToStart()
        }
    }

    fun writeAudioSampleData(byteBuffer: ByteBuffer, bufferInfo: MediaCodec.BufferInfo) {
        if (isStarted) {
            muxer.writeSampleData(audioTrackIndex ?: return, byteBuffer, bufferInfo)
        }
    }

    fun writeVideoSampleData(byteBuffer: ByteBuffer, bufferInfo: MediaCodec.BufferInfo) {
        if (isStarted) {
            muxer.writeSampleData(videoTrackIndex ?: return, byteBuffer, bufferInfo)
        }
    }

    private fun tryToStart() {
        if (audioReady && videoReady) {
            muxer.start()
            isStarted = true
        }
    }

    fun stop() {
        muxer.stop()
        isStarted = false
        audioReady = false
        videoReady = false
    }

    fun release() {
        muxer.release()
    }
}

Releasing Resources

  • Stop and release the encoders
    fun destroy() {
        mMediaCodec.stop()
        mMediaCodec.release()
    }
  • Release the MediaMuxer
	muxer.release()
  • Release the video encoder's input surface
	videoCodecInputSurface.release()
  • Release the AudioRecord
	mAudioRecord.release()
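Putting these together, one possible shutdown order is sketched below (names are illustrative; the key point is to stop the source first, let the encoders drain, and stop the muxer before releasing anything):

```kotlin
fun stopRecording() {
    mAudioRecord.stop()              // the audio loop sees RECORDSTATE_STOPPED and sends EOS
    audioEncodeThread.join()         // wait for both encode loops to drain and exit
    videoEncodeThread.join()
    audioCodec.stop(); audioCodec.release()
    videoCodec.stop(); videoCodec.release()
    muxerMp4.stop()                  // muxer.stop() finalizes the MP4
    muxerMp4.release()
    videoCodecInputSurface.release()
    mAudioRecord.release()
}
```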

Source Code

https://github.com/blueberryCoder/media2/tree/main/MediaRecorder/app/src/main/java/com/blueberry/mediarecorder

References

Camera2 API documentation
MediaCodec
MediaMuxer

Reprinted from blog.csdn.net/a992036795/article/details/125233637