Hardware-encoding AAC audio with the MediaCodec API (asynchronous mode, Android 5.0 / API 21), recording with AudioRecord

Disclaimer: This is an original article by the blogger and may not be reproduced without permission. https://blog.csdn.net/One_Month/article/details/90476900

There are already several articles on hardware AAC encoding, but they all use the synchronous API; this one uses the asynchronous implementation, and the code is in Kotlin.

Although the code is written in Kotlin, you can port it to Java by following the same ideas.

I will upload the code to GitHub at the end so you can see the complete flow; for ease of reading, all of the code is written in a single Activity.

Recording and encoding each run on a worker thread,
and the file is encoded and written while recording is still going on.
On Android 6.0+ remember the runtime permissions for recording and for reading/writing files; a minimal check is sketched below.
The example uses the Android 5.0 (API 21) asynchronous API.
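For the permission point above, a minimal sketch of the runtime check inside the Activity (my own addition, not from the original code; it uses androidx.core ContextCompat/ActivityCompat, and REQUEST_RECORD_AUDIO is a placeholder request code):

    // Runtime permission check for recording; call before starting AudioRecord.
    private val REQUEST_RECORD_AUDIO = 1

    private fun hasRecordPermission(): Boolean {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED
        ) {
            // Ask for the permission and report "not yet granted" to the caller
            ActivityCompat.requestPermissions(
                this, arrayOf(Manifest.permission.RECORD_AUDIO), REQUEST_RECORD_AUDIO
            )
            return false
        }
        return true
    }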

(MediaCodec official schematic)
The diagram above is the official one; let me start with the basic principle.
MediaCodec gives us a set of input buffers and a set of output buffers. We take an empty input buffer, fill it with raw audio data, and hand the filled buffer back to the codec so it can encode it.
Once encoded, the data is placed in an output buffer; we take that output buffer, read the data, and save it to a file,
and by repeating this cycle the whole recording gets encoded and saved.

Note that an input buffer does not correspond one-to-one to an output buffer: one input buffer may be spread over several output buffers.
For example, if you put 1000 bytes of data into an input buffer, after encoding the result may come back in several output buffers,
each holding maybe 200 or 300 bytes.
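In the asynchronous mode used in this article you do not poll for buffers yourself; you register a MediaCodec.Callback and the system hands you buffer indices. A bare skeleton of that round trip, just to show the shape of it (the real callback bodies come in step 4):

    codec.setCallback(object : MediaCodec.Callback() {
        override fun onInputBufferAvailable(codec: MediaCodec, index: Int) {
            val input = codec.getInputBuffer(index)   // an empty buffer to fill with raw PCM
            // ...put raw audio data into it...
            codec.queueInputBuffer(index, 0, 0, 0, 0) // hand it back (real offset/size/pts/flags in step 4)
        }

        override fun onOutputBufferAvailable(codec: MediaCodec, index: Int, info: MediaCodec.BufferInfo) {
            val output = codec.getOutputBuffer(index) // holds the encoded data described by info
            // ...read info.size bytes and write them to the file...
            codec.releaseOutputBuffer(index, false)   // give the buffer back to the codec
        }

        override fun onOutputFormatChanged(codec: MediaCodec, format: MediaFormat) {}
        override fun onError(codec: MediaCodec, e: MediaCodec.CodecException) {}
    })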

Now for the actual steps.

1. Configure the AudioRecord

    /**
     * Initialize audio capture
     */

    private fun initAudioRecorder() {
        //Minimum buffer size, calculated with the method the system provides
        minBufferSize = AudioRecord.getMinBufferSize(
            AudioConfig.SAMPLE_RATE,
            AudioConfig.CHANNEL_CONFIG, AudioConfig.AUDIO_FORMAT
        )

        //Create the AudioRecord object
        audioRecorder = AudioRecord(
            MediaRecorder.AudioSource.MIC,
            AudioConfig.SAMPLE_RATE, AudioConfig.CHANNEL_CONFIG, AudioConfig.AUDIO_FORMAT,
            minBufferSize
        )
    }

The AudioConfig referenced above is as follows:

    const val SAMPLE_RATE = 44100  //sample rate
    /**
     * CHANNEL_IN_MONO: mono, guaranteed to be supported on all devices
     * CHANNEL_IN_STEREO: stereo
     */
    const val CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_MONO

    /**
     * Format of the returned audio data
     */
    const val AUDIO_FORMAT = AudioFormat.ENCODING_PCM_16BIT

    /**
     * Output audio channel configuration
     */
    const val CHANNEL_OUT_CONFIG = AudioFormat.CHANNEL_OUT_MONO
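These constants live together in one place; a minimal sketch of the AudioConfig the code refers to, assuming it is a plain Kotlin object:

    // Grouping of the constants above into the AudioConfig referenced by the code.
    object AudioConfig {
        const val SAMPLE_RATE = 44100
        const val CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_MONO
        const val AUDIO_FORMAT = AudioFormat.ENCODING_PCM_16BIT
        const val CHANNEL_OUT_CONFIG = AudioFormat.CHANNEL_OUT_MONO
    }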

2. Create a LinkedBlockingDeque to hold the recorded data
Since recording and encoding both run on worker threads, it is hard to pass the data between them directly; saving it in a queue that guarantees synchronization makes sure the data is read correctly. The stored element type is a byte array.

 private var audioList: LinkedBlockingDeque<ByteArray>? = LinkedBlockingDeque()
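The recording thread acts as producer and the encoding callback as consumer; the same calls appear in the full code below. Note that offer() never blocks, and poll() returns null immediately when the queue is empty instead of waiting, which the encoding callback has to handle. A tiny sketch (pcmChunk is a placeholder name):

    // Recording thread (producer): push the chunk that was just read from AudioRecord.
    audioList?.offer(pcmChunk)

    // Encoding callback (consumer): may get null if recording hasn't produced data yet.
    val chunk: ByteArray? = audioList?.poll()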

3. Start recording and saving the data

//Start a thread to run the recording
  thread(priority = android.os.Process.THREAD_PRIORITY_URGENT_AUDIO) {

            try {
                //Check that the AudioRecord initialized successfully
                if (AudioRecord.STATE_INITIALIZED == audioRecorder.state) {
                    isRecording = true  //flag: recording in progress
                    audioRecorder.startRecording()
                    val outputArray = ByteArray(minBufferSize)
                    while (isRecording) {

                        val readCode = audioRecorder.read(outputArray, 0, minBufferSize)

                        //readCode can also be one of several negative error codes;
                        //for simplicity they are not handled here
                        if (readCode > 0) {
                            val realArray = ByteArray(readCode)
                            System.arraycopy(outputArray, 0, realArray, 0, readCode)
                            //Save the data that was read into the LinkedBlockingDeque
                            audioList?.offer(realArray)
                        }
                    }
                    //Recording finished: add an end marker so the encoder knows the recording is over.
                    //This marker is only an example; it does not have to be this exact array, as long
                    //as it can be told apart from normal data and recognized as a special value.
                    val stopArray = byteArrayOf((-777).toByte(), (-888).toByte())
                    audioList?.offer(stopArray)
                }
            } catch (e: IOException) {

            } finally {
                if (audioRecorder != null)
                    //Release the resources. Don't forget!!!
                    audioRecorder.release()

            }

        }

With the end marker added this way when the recording finishes, the encoder can use it to tell whether the stream of data it is encoding has reached its end. One way to factor out that check is sketched below.
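An optional helper for this (my own refactoring of the inline check used in step 4 below, not something the original code requires):

    // The same two-byte end marker queued when recording stops; any value that cannot
    // be confused with normal audio chunks would work just as well.
    private val stopMarker = byteArrayOf((-777).toByte(), (-888).toByte())

    // Returns true if this chunk is the end-of-recording marker rather than audio data.
    private fun isEndMarker(data: ByteArray): Boolean =
        data.size >= 2 && data[0] == stopMarker[0] && data[1] == stopMarker[1]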

4. Configure and start the encoder (run it as a task on a worker thread)
(Table of the data MediaFormat must carry for encoding and decoding)
The picture shows the data that encoding and decoding require. For decoding, the MediaFormat usually comes out of a MediaExtractor already configured, so you don't need to fill it in yourself (a rough sketch of that case follows); for encoding we build it by hand, as in the function after that.
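For the decoding case only, a rough sketch of pulling the MediaFormat from the source file with MediaExtractor (not used by this article's encoder; "/path/to/input.aac" is a placeholder path):

    val extractor = MediaExtractor()
    extractor.setDataSource("/path/to/input.aac")
    for (i in 0 until extractor.trackCount) {
        val format = extractor.getTrackFormat(i)
        val mime = format.getString(MediaFormat.KEY_MIME)
        if (mime?.startsWith("audio/") == true) {
            extractor.selectTrack(i)
            // this format already contains sample rate, channel count, etc.,
            // and can be passed straight to MediaCodec.configure() for a decoder
            break
        }
    }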

   fun mediaCodecEncodeToAAC() {

        val currentTime = Date().time * 1000
        try {

            val isSupprot = isSupprotAAC()
            //Create the audio MediaFormat. It can also be created with the constructor,
            //but then you have to call the setXXX methods yourself to fill in the data
            val encodeFormat =
                MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 
                AudioConfig.SAMPLE_RATE, 1)
            /**
				Below are a few of the more important parameters
			*/
            //Set the bit rate
            encodeFormat.setInteger(MediaFormat.KEY_BIT_RATE, 96000)
            //Set the AAC profile; AAC has many profiles and LC is one of them
            encodeFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, 
            MediaCodecInfo.CodecProfileLevel.AACObjectLC)
            //Set the maximum input size; here it is twice the buffer size calculated earlier
            encodeFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, minBufferSize * 2)

            //Create the encoder
            mediaEncode = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC)
            //Set the asynchronous callback; the callback implementation is shown below
            mediaEncode.setCallback(callback)
            //Call configure to enter the Configured state
            mediaEncode.configure(encodeFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
            //Call start to enter the Executing state
            mediaEncode.start()
        } catch (e: IOException) {
           
        } finally {
        }
    }
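The isSupprotAAC() call above is not shown in this excerpt; a minimal sketch of such a check, assuming it simply looks for any AAC encoder in the codec list:

    // Walk the codec list and look for an encoder that supports audio/mp4a-latm (AAC).
    private fun isSupprotAAC(): Boolean {
        val list = MediaCodecList(MediaCodecList.REGULAR_CODECS)
        return list.codecInfos.any { info ->
            info.isEncoder && info.supportedTypes.any {
                it.equals(MediaFormat.MIMETYPE_AUDIO_AAC, ignoreCase = true)
            }
        }
    }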

The following is the code that implements the callback

private val callback = object : MediaCodec.Callback() {
                override fun onOutputFormatChanged(codec: MediaCodec, format: MediaFormat) {
                }

                override fun onError(codec: MediaCodec, e: MediaCodec.CodecException) {
                    Log.i("error", e.message ?: "")
                }
				//Called automatically by the system when an output buffer becomes available
                override fun onOutputBufferAvailable(
                    codec: MediaCodec,
                    index: Int,
                    info: MediaCodec.BufferInfo
                ) {

                   //BufferInfo describes the data in the buffer; that data is the encoded output
                    val outBitsSize = info.size
					//Add a header to the AAC data; the header takes 7 bytes.
					//AAC has two container forms, ADIF and ADTS: ADIF has one header and the rest
					//is audio data, while ADTS puts a header in front of every encoded frame.
					//outPacketSize is the total size: the header plus the returned data
                    val outPacketSize = outBitsSize + 7 // 7 is ADTS size

					//Get the buffer for this index
                    val outputBuffer = codec.getOutputBuffer(index) ?: return

					//In case the buffer has a non-zero offset, don't read from 0 or you get the
					//wrong data (in my tests the offset was always 0, but it may not always be)
                    outputBuffer.position(info.offset)
                    
					//Set the upper bound (limit) of the buffer. If this is unclear, read up on
					//ByteBuffer (NIO) and what limit, position, clear() and flip() do
                    outputBuffer.limit(info.offset + outBitsSize)
                    
					//Create a byte array to hold header + data
                    val outData = ByteArray(outPacketSize)
					
					//Write the header into the array (the method is shown later; it just writes 7 bytes
					//at the front). Note: ADTS wants the sampling-frequency index here (4 = 44100 Hz),
					//not the raw sample rate
                    addADTStoPacket(4, outData, outPacketSize)
					
					//Copy the buffer's data into the array after the header
                    outputBuffer.get(outData, 7, outBitsSize)

                    outputBuffer.position(info.offset)


					//bufferedOutputStream is a wrapper stream I created
					//around a FileOutputStream

					//Write the data to the file
					 bufferedOutputStream.write(outData)
                    bufferedOutputStream.flush()
                    outputBuffer.clear()
                    //Release the output buffer!!!!! You must release it
                    codec.releaseOutputBuffer(index, false)
                }

				/**
					Called automatically by the system whenever an input buffer becomes available
				*/
                override fun onInputBufferAvailable(codec: MediaCodec, index: Int) {

					//Get the buffer for this index
                    val inputBuffer = codec.getInputBuffer(index)
					
					//Take one chunk of not-yet-encoded raw audio from the LinkedBlockingDeque
                    val pop = audioList?.poll()
      
      				//Check whether we have reached the end of the audio data; the test depends on
      				//the end marker you chose. Here I check it like this:
                    if (pop != null && pop.size >= 2 && (pop[0] == (-777).toByte() && pop[1] == (-888).toByte())) {
                        //End marker found
                        isEndTip = true
                    }
					
					//If the data is not null and not the end marker, write it into the buffer and
					//let MediaCodec encode it.
					//currentTime is the Date().getTime() value captured earlier; subtracting it from
					//the current time just keeps the timestamps passed in small

                    if (pop != null && !isEndTip) {
             
             			//Fill in the data
                        inputBuffer?.clear()
                        inputBuffer?.limit(pop.size)
                        inputBuffer?.put(pop, 0, pop.size)
                        //Hand the buffer back to MediaCodec; you must hand it back.
                        //The fourth parameter is the presentation timestamp; it must be increasing,
                        //because the system uses it to work out the total audio duration and the
                        //spacing between frames
                        codec.queueInputBuffer(
                            index,
                            0,
                            pop.size,
                            Date().time * 1000 - currentTime,
                            0
                        )
                    }

					//Because we can't know which of the two threads starts first, the encoding side
					//may run before any data is queued, so poll() returns null even though we are not
					//at the end. In that case still call queueInputBuffer to hand the buffer back,
					//just with a data size of 0.
					
                    //If you skip queueInputBuffer when pop is null, after a few callbacks there are no
                    //free input buffers left, MediaCodec can make no progress, and this callback is
                    //never invoked again
                    if (pop == null && !isEndTip) {

                        codec.queueInputBuffer(
                                index,
                                0,
                                0,
                                Date().time * 1000 - currentTime,
                                0
                        )
                    }
					
					//When the end marker is found, queue an end-of-stream buffer with
					//flag MediaCodec.BUFFER_FLAG_END_OF_STREAM
					//to tell the codec that encoding is finished
                    if (isEndTip) {
                        codec.queueInputBuffer(
                            index,
                            0,
                            0,
                            Date().time * 1000 - currentTime,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM
                        )
                    }
                }

            }

NOTE:
1. In the onInputBufferAvailable callback, when the ByteArray fetched from the queue is null (i.e. the recorder hasn't produced data yet), remember to still hand the acquired inputBuffer back with queueInputBuffer. Otherwise the codec eventually finds no available input buffer, this callback stops being invoked, and nothing more gets written!!! A lesson learned the hard way: I forgot this at first, and every output file came out as 9 B (2 bytes written by default plus the 7-byte header that was added).

2. When the end of the recording is detected and there is no more data, queue a buffer carrying the end-of-stream flag,
MediaCodec.BUFFER_FLAG_END_OF_STREAM, so that onInputBufferAvailable stops being called back and the codec no longer accepts data to encode.
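On the output side you can watch for the same flag to know when the last encoded buffer has arrived and it is safe to close the file. A sketch of what the end of onOutputBufferAvailable could do (the cleanup order here is my own choice, not something the original code shows):

    // Detect end of stream on the output side and tidy up.
    if (info.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM != 0) {
        bufferedOutputStream.flush()
        bufferedOutputStream.close()  // also closes the wrapped FileOutputStream
        codec.stop()
        codec.release()
    }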

5. Write the ADTS header

I took the header code straight from the Internet.

   /**
     * Add the ADTS header. If the AAC will be muxed with a video stream you don't add it;
     * a standalone AAC file needs it, otherwise it cannot be played.
     * @param sampleRateType the ADTS sampling-frequency index (4 for the 44100 Hz configured earlier)
     * @param packet  the byte array created earlier that holds the header plus the encoded audio data
     * @param packetLen total length of the byte array
     */
    fun addADTStoPacket(sampleRateType: Int, packet: ByteArray, packetLen: Int) {
        val profile = 2 // AAC LC
        val chanCfg = 1 // channel count; this is the number of channels you configured earlier

        packet[0] = 0xFF.toByte()
        packet[1] = 0xF9.toByte()
        packet[2] = ((profile - 1 shl 6) + (sampleRateType shl 2) + (chanCfg shr 2)).toByte()
        packet[3] = ((chanCfg and 3 shl 6) + (packetLen shr 11)).toByte()
        packet[4] = (packetLen and 0x7FF shr 3).toByte()
        packet[5] = ((packetLen and 7 shl 5) + 0x1F).toByte()
        packet[6] = 0xFC.toByte()
    }
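If you record at a different sample rate, the index passed to addADTStoPacket() changes with it. A small helper for the mapping (my own addition, not from the original code):

    // Map a sample rate to the ADTS sampling-frequency index used in byte 2 of the header.
    private fun adtsFreqIndex(sampleRate: Int): Int = when (sampleRate) {
        96000 -> 0
        88200 -> 1
        64000 -> 2
        48000 -> 3
        44100 -> 4
        32000 -> 5
        24000 -> 6
        22050 -> 7
        16000 -> 8
        12000 -> 9
        11025 -> 10
        8000 -> 11
        else -> 4 // fall back to the 44.1 kHz index
    }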

6. Create the output file
This step may not be necessary, because when we write to a file the system creates it if it doesn't exist. But if there is a problem with the path, writes can silently go nowhere: I ran into a case where the data was not empty, yet the file written out in the end had size 0. So to be on the safe side I always create the file first; if you run into the same situation, you can try this step.

//filesDir is the Activity's context.getFilesDir()
  file = File(filesDir, "record.aac")
  if (!file.exists()) {
  		//Create the file if it doesn't exist
        file.createNewFile()
  }

  if (!file.isDirectory) {
        outputStream = FileOutputStream(file, true)
        bufferedOutputStream = BufferedOutputStream(outputStream, 4096)
  }

The complete code address

See MediaCodecForAACActivity in the code.
If there are any mistakes, feel free to point them out in the comments.

Tip: it's best not to do different kinds of work on the same thread, for example recording together with encoding, or encoding together with fetching the recorded data!!!
It is very likely to go wrong, especially on the playback side, where "no sound" is not easy to track down.
