Android Audio Development (4): How to Play a Frame of Audio Data, Part 2

If you are new to the topic, I recommend reading Android Audio Development (1): The Basics before this article. Today we continue looking at how to play a frame of audio data using the APIs that ship with the Android SDK.

As we know, the Android SDK ships with three APIs for playing audio: MediaPlayer, SoundPool, and AudioTrack. If you are not familiar with them, see my previous article, Android Audio Development (3): How to Play a Frame of Audio Data, Part 1; I won't repeat the details here. In short: MediaPlayer is best suited to playing local music files or online streaming resources in the background for long periods; SoundPool is suited to short audio clips, such as game sounds, key clicks, and ringtone fragments, and can play several sounds simultaneously; AudioTrack sits closest to the platform layer, offers very fine-grained control, supports low-latency playback, and fits scenarios such as streaming media and VoIP calls.

Audio development goes well beyond playing local files or short clips, so this article focuses on how to use the AudioTrack API to play audio data. (Note: audio played through AudioTrack must be decoded PCM data.)

1. The AudioTrack Workflow

From the previous article, we know the AudioTrack workflow breaks down into roughly four steps (see the sketch after this list):

  1. Configure the parameters and initialize the internal audio playback buffer
  2. Start playback
  3. Run a dedicated thread that continuously "writes" audio data into the AudioTrack buffer. Note that this must happen in time; otherwise an "underrun" error occurs. This error is common in audio development and means the application failed to "feed" audio data in time, leaving the internal playback buffer empty.
  4. Stop playback and release resources
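
To make those steps concrete, here is a minimal streaming-mode sketch. It is only an illustration: the readPcm callback that supplies decoded PCM bytes is a placeholder, not part of the original article.

import android.media.AudioFormat
import android.media.AudioManager
import android.media.AudioTrack

// Minimal four-step sketch in streaming mode; readPcm fills the buffer with
// decoded PCM bytes and returns the count, or -1 when the source is exhausted.
fun playPcmStream(readPcm: (ByteArray) -> Int) {
    // 1. Configure parameters and size the internal playback buffer
    val minBufSize = AudioTrack.getMinBufferSize(
            44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT)
    val track = AudioTrack(AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            minBufSize, AudioTrack.MODE_STREAM)
    // 2. Start playback
    track.play()
    // 3. Keep writing data in time; falling behind causes an underrun
    val buffer = ByteArray(minBufSize)
    var n = readPcm(buffer)
    while (n > 0) {
        track.write(buffer, 0, n)
        n = readPcm(buffer)
    }
    // 4. Stop playback and release resources
    track.stop()
    track.release()
}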

2. AudioTrack Parameter Configuration

Before going through the parameter configuration, let's look at the constructors; we will then explain what each configuration parameter means:

    /**
     * Class constructor.
     * @param streamType the type of the audio stream. See
     *   {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM},
     *   {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC},
     *   {@link AudioManager#STREAM_ALARM}, and {@link AudioManager#STREAM_NOTIFICATION}.
     * @param sampleRateInHz the initial source sample rate expressed in Hz.
     *   {@link AudioFormat#SAMPLE_RATE_UNSPECIFIED} means to use a route-dependent value
     *   which is usually the sample rate of the sink.
     *   {@link #getSampleRate()} can be used to retrieve the actual sample rate chosen.
     * @param channelConfig describes the configuration of the audio channels.
     *   See {@link AudioFormat#CHANNEL_OUT_MONO} and
     *   {@link AudioFormat#CHANNEL_OUT_STEREO}
     * @param audioFormat the format in which the audio data is represented.
     *   See {@link AudioFormat#ENCODING_PCM_16BIT},
     *   {@link AudioFormat#ENCODING_PCM_8BIT},
     *   and {@link AudioFormat#ENCODING_PCM_FLOAT}.
     * @param bufferSizeInBytes the total size (in bytes) of the internal buffer where audio data is
     *   read from for playback. This should be a nonzero multiple of the frame size in bytes.
     *   <p> If the track's creation mode is {@link #MODE_STATIC},
     *   this is the maximum length sample, or audio clip, that can be played by this instance.
     *   <p> If the track's creation mode is {@link #MODE_STREAM},
     *   this should be the desired buffer size
     *   for the <code>AudioTrack</code> to satisfy the application's
     *   latency requirements.
     *   If <code>bufferSizeInBytes</code> is less than the
     *   minimum buffer size for the output sink, it is increased to the minimum
     *   buffer size.
     *   The method {@link #getBufferSizeInFrames()} returns the
     *   actual size in frames of the buffer created, which
     *   determines the minimum frequency to write
     *   to the streaming <code>AudioTrack</code> to avoid underrun.
     *   See {@link #getMinBufferSize(int, int, int)} to determine the estimated minimum buffer size
     *   for an AudioTrack instance in streaming mode.
     * @param mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}
     * @throws java.lang.IllegalArgumentException
     * @deprecated use {@link Builder} or
     *   {@link #AudioTrack(AudioAttributes, AudioFormat, int, int, int)} to specify the
     *   {@link AudioAttributes} instead of the stream type which is only for volume control.
     */
    public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat,
            int bufferSizeInBytes, int mode)
    throws IllegalArgumentException {
        this(streamType, sampleRateInHz, channelConfig, audioFormat,
                bufferSizeInBytes, mode, AudioManager.AUDIO_SESSION_ID_GENERATE);
    }

    /**
     * Class constructor with audio session. Use this constructor when the AudioTrack must be
     * attached to a particular audio session. The primary use of the audio session ID is to
     * associate audio effects to a particular instance of AudioTrack: if an audio session ID
     * is provided when creating an AudioEffect, this effect will be applied only to audio tracks
     * and media players in the same session and not to the output mix.
     * When an AudioTrack is created without specifying a session, it will create its own session
     * which can be retrieved by calling the {@link #getAudioSessionId()} method.
     * If a non-zero session ID is provided, this AudioTrack will share effects attached to this
     * session
     * with all other media players or audio tracks in the same session, otherwise a new session
     * will be created for this track if none is supplied.
     * @param streamType the type of the audio stream. See
     *   {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM},
     *   {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC},
     *   {@link AudioManager#STREAM_ALARM}, and {@link AudioManager#STREAM_NOTIFICATION}.
     * @param sampleRateInHz the initial source sample rate expressed in Hz.
     *   {@link AudioFormat#SAMPLE_RATE_UNSPECIFIED} means to use a route-dependent value
     *   which is usually the sample rate of the sink.
     * @param channelConfig describes the configuration of the audio channels.
     *   See {@link AudioFormat#CHANNEL_OUT_MONO} and
     *   {@link AudioFormat#CHANNEL_OUT_STEREO}
     * @param audioFormat the format in which the audio data is represented.
     *   See {@link AudioFormat#ENCODING_PCM_16BIT} and
     *   {@link AudioFormat#ENCODING_PCM_8BIT},
     *   and {@link AudioFormat#ENCODING_PCM_FLOAT}.
     * @param bufferSizeInBytes the total size (in bytes) of the internal buffer where audio data is
     *   read from for playback. This should be a nonzero multiple of the frame size in bytes.
     *   <p> If the track's creation mode is {@link #MODE_STATIC},
     *   this is the maximum length sample, or audio clip, that can be played by this instance.
     *   <p> If the track's creation mode is {@link #MODE_STREAM},
     *   this should be the desired buffer size
     *   for the <code>AudioTrack</code> to satisfy the application's
     *   latency requirements.
     *   If <code>bufferSizeInBytes</code> is less than the
     *   minimum buffer size for the output sink, it is increased to the minimum
     *   buffer size.
     *   The method {@link #getBufferSizeInFrames()} returns the
     *   actual size in frames of the buffer created, which
     *   determines the minimum frequency to write
     *   to the streaming <code>AudioTrack</code> to avoid underrun.
     *   You can write data into this buffer in smaller chunks than this size.
     *   See {@link #getMinBufferSize(int, int, int)} to determine the estimated minimum buffer size
     *   for an AudioTrack instance in streaming mode.
     * @param mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}
     * @param sessionId Id of audio session the AudioTrack must be attached to
     * @throws java.lang.IllegalArgumentException
     * @deprecated use {@link Builder} or
     *   {@link #AudioTrack(AudioAttributes, AudioFormat, int, int, int)} to specify the
     *   {@link AudioAttributes} instead of the stream type which is only for volume control.
     */
    public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat,
            int bufferSizeInBytes, int mode, int sessionId)
    throws IllegalArgumentException {
        // mState already == STATE_UNINITIALIZED
        this((new AudioAttributes.Builder())
                    .setLegacyStreamType(streamType)
                    .build(),
                (new AudioFormat.Builder())
                    .setChannelMask(channelConfig)
                    .setEncoding(audioFormat)
                    .setSampleRate(sampleRateInHz)
                    .build(),
                bufferSizeInBytes,
                mode, sessionId);
        deprecateStreamTypeForPlayback(streamType, "AudioTrack", "AudioTrack()");
    }

    /**
     * Class constructor with {@link AudioAttributes} and {@link AudioFormat}.
     * @param attributes a non-null {@link AudioAttributes} instance.
     * @param format a non-null {@link AudioFormat} instance describing the format of the data
     *     that will be played through this AudioTrack. See {@link AudioFormat.Builder} for
     *     configuring the audio format parameters such as encoding, channel mask and sample rate.
     * @param bufferSizeInBytes the total size (in bytes) of the internal buffer where audio data is
     *   read from for playback. This should be a nonzero multiple of the frame size in bytes.
     *   <p> If the track's creation mode is {@link #MODE_STATIC},
     *   this is the maximum length sample, or audio clip, that can be played by this instance.
     *   <p> If the track's creation mode is {@link #MODE_STREAM},
     *   this should be the desired buffer size
     *   for the <code>AudioTrack</code> to satisfy the application's
     *   latency requirements.
     *   If <code>bufferSizeInBytes</code> is less than the
     *   minimum buffer size for the output sink, it is increased to the minimum
     *   buffer size.
     *   The method {@link #getBufferSizeInFrames()} returns the
     *   actual size in frames of the buffer created, which
     *   determines the minimum frequency to write
     *   to the streaming <code>AudioTrack</code> to avoid underrun.
     *   See {@link #getMinBufferSize(int, int, int)} to determine the estimated minimum buffer size
     *   for an AudioTrack instance in streaming mode.
     * @param mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}.
     * @param sessionId ID of audio session the AudioTrack must be attached to, or
     *   {@link AudioManager#AUDIO_SESSION_ID_GENERATE} if the session isn't known at construction
     *   time. See also {@link AudioManager#generateAudioSessionId()} to obtain a session ID before
     *   construction.
     * @throws IllegalArgumentException
     */
    public AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
            int mode, int sessionId)
                    throws IllegalArgumentException {
        this(attributes, format, bufferSizeInBytes, mode, sessionId, false /*offload*/,
                ENCAPSULATION_MODE_NONE, null /* tunerConfiguration */);
    }

    private AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
            int mode, int sessionId, boolean offload, int encapsulationMode,
            @Nullable TunerConfiguration tunerConfiguration)
                    throws IllegalArgumentException {
        super(attributes, AudioPlaybackConfiguration.PLAYER_TYPE_JAM_AUDIOTRACK);
        // mState already == STATE_UNINITIALIZED

        mConfiguredAudioAttributes = attributes; // object copy not needed, immutable.

        if (format == null) {
            throw new IllegalArgumentException("Illegal null AudioFormat");
        }

        // Check if we should enable deep buffer mode
        if (shouldEnablePowerSaving(mAttributes, format, bufferSizeInBytes, mode)) {
            mAttributes = new AudioAttributes.Builder(mAttributes)
                .replaceFlags((mAttributes.getAllFlags()
                        | AudioAttributes.FLAG_DEEP_BUFFER)
                        & ~AudioAttributes.FLAG_LOW_LATENCY)
                .build();
        }

        // remember which looper is associated with the AudioTrack instantiation
        Looper looper;
        if ((looper = Looper.myLooper()) == null) {
            looper = Looper.getMainLooper();
        }

        int rate = format.getSampleRate();
        if (rate == AudioFormat.SAMPLE_RATE_UNSPECIFIED) {
            rate = 0;
        }

        int channelIndexMask = 0;
        if ((format.getPropertySetMask()
                & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_INDEX_MASK) != 0) {
            channelIndexMask = format.getChannelIndexMask();
        }
        int channelMask = 0;
        if ((format.getPropertySetMask()
                & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_MASK) != 0) {
            channelMask = format.getChannelMask();
        } else if (channelIndexMask == 0) { // if no masks at all, use stereo
            channelMask = AudioFormat.CHANNEL_OUT_FRONT_LEFT
                    | AudioFormat.CHANNEL_OUT_FRONT_RIGHT;
        }
        int encoding = AudioFormat.ENCODING_DEFAULT;
        if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_ENCODING) != 0) {
            encoding = format.getEncoding();
        }
        audioParamCheck(rate, channelMask, channelIndexMask, encoding, mode);
        mOffloaded = offload;
        mStreamType = AudioSystem.STREAM_DEFAULT;

        audioBuffSizeCheck(bufferSizeInBytes);

        mInitializationLooper = looper;

        if (sessionId < 0) {
            throw new IllegalArgumentException("Invalid audio session ID: "+sessionId);
        }

        int[] sampleRate = new int[] {mSampleRate};
        int[] session = new int[1];
        session[0] = sessionId;
        // native initialization
        int initResult = native_setup(new WeakReference<AudioTrack>(this), mAttributes,
                sampleRate, mChannelMask, mChannelIndexMask, mAudioFormat,
                mNativeBufferSizeInBytes, mDataLoadMode, session, 0 /*nativeTrackInJavaObj*/,
                offload, encapsulationMode, tunerConfiguration);
        if (initResult != SUCCESS) {
            loge("Error code "+initResult+" when initializing AudioTrack.");
            return; // with mState == STATE_UNINITIALIZED
        }

        mSampleRate = sampleRate[0];
        mSessionId = session[0];

        // TODO: consider caching encapsulationMode and tunerConfiguration in the Java object.

        if ((mAttributes.getFlags() & AudioAttributes.FLAG_HW_AV_SYNC) != 0) {
            int frameSizeInBytes;
            if (AudioFormat.isEncodingLinearFrames(mAudioFormat)) {
                frameSizeInBytes = mChannelCount * AudioFormat.getBytesPerSample(mAudioFormat);
            } else {
                frameSizeInBytes = 1;
            }
            mOffset = ((int) Math.ceil(HEADER_V2_SIZE_BYTES / frameSizeInBytes)) * frameSizeInBytes;
        }

        if (mDataLoadMode == MODE_STATIC) {
            mState = STATE_NO_STATIC_DATA;
        } else {
            mState = STATE_INITIALIZED;
        }

        baseRegisterPlayer();
    }

    /**
     * A constructor which explicitly connects a Native (C++) AudioTrack. For use by
     * the AudioTrackRoutingProxy subclass.
     * @param nativeTrackInJavaObj a C/C++ pointer to a native AudioTrack
     * (associated with an OpenSL ES player).
     * IMPORTANT: For "N", this method is ONLY called to setup a Java routing proxy,
     * i.e. IAndroidConfiguration::AcquireJavaProxy(). If we call with a 0 in nativeTrackInJavaObj
     * it means that the OpenSL player interface hasn't been realized, so there is no native
     * Audiotrack to connect to. In this case wait to call deferred_connect() until the
     * OpenSLES interface is realized.
     */
    /*package*/ AudioTrack(long nativeTrackInJavaObj) {
        super(new AudioAttributes.Builder().build(),
                AudioPlaybackConfiguration.PLAYER_TYPE_JAM_AUDIOTRACK);
        // "final"s
        mNativeTrackInJavaObj = 0;
        mJniData = 0;

        // remember which looper is associated with the AudioTrack instantiation
        Looper looper;
        if ((looper = Looper.myLooper()) == null) {
            looper = Looper.getMainLooper();
        }
        mInitializationLooper = looper;

        // other initialization...
        if (nativeTrackInJavaObj != 0) {
            baseRegisterPlayer();
            deferred_connect(nativeTrackInJavaObj);
        } else {
            mState = STATE_UNINITIALIZED;
        }
    }

1: streamType

This parameter specifies which audio management policy the app uses. When several processes need to play audio at the same time, this policy determines what the user ultimately hears. The available values are defined as constants in the AudioManager class, mainly:

STREAM_VOICE_CALL: call audio

STREAM_SYSTEM: system sounds

STREAM_RING: ringtones

STREAM_MUSIC: music

STREAM_ALARM: alarms

STREAM_NOTIFICATION: notification sounds

2: sampleRateInHz

The sample rate. As the audioParamCheck function in the AudioTrack source shows, its value must lie between 4000 Hz and 192000 Hz.

3: channelConfig

The channel configuration. The available values are defined as constants in the AudioFormat class; for AudioTrack (playback) the common ones are CHANNEL_OUT_MONO (mono) and CHANNEL_OUT_STEREO (stereo). (The CHANNEL_IN_* constants are for recording, not playback.)

4: audioFormat

This parameter configures the sample format ("bit depth"). The available values are again constants in the AudioFormat class; the common ones are ENCODING_PCM_16BIT (16-bit) and ENCODING_PCM_8BIT (8-bit). Note that only the former is guaranteed to be compatible with all Android devices.

5: bufferSizeInBytes

This is the hardest to understand yet most important parameter: it sets the size of AudioTrack's internal audio buffer. The buffer must not be smaller than one audio "frame", and as the previous article explained, the size of one frame is computed as:

int size = sampleRate × bytesPerSample × frameDurationInSeconds × channelCount

The frame duration is usually between 2.5 ms and 120 ms and is decided by the vendor or the specific application. It follows that the shorter the frame duration, the lower the latency, at the cost of more fragmented data.
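
For example, at 44100 Hz with 16-bit samples (2 bytes) and 2 channels, a 20 ms frame works out to 44100 × 2 × 0.02 × 2 = 3528 bytes.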

In Android development, the AudioTrack class provides a helper to determine bufferSizeInBytes for you; its prototype is:

int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat);

Vendors implement this differently at the native layer, but it boils down to computing one frame's size from the formula above; the audio buffer must then be 2 to N times the size of one frame. If you are interested, dig into the source for the details.

In practice, I strongly recommend letting this function compute bufferSizeInBytes rather than calculating it by hand.
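
For instance, a minimal sketch of querying the minimum buffer size, using the same parameters as the wrapper class later in this article:

import android.media.AudioFormat
import android.media.AudioTrack

fun queryMinBufferSize(): Int {
    // Minimum streaming buffer size in bytes for 44100 Hz, mono, 16-bit PCM.
    // A negative result (AudioTrack.ERROR or AudioTrack.ERROR_BAD_VALUE) means
    // this parameter combination is not supported on the device.
    val minBufferSize = AudioTrack.getMinBufferSize(
            44100, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT)
    check(minBufferSize > 0) { "Unsupported AudioTrack parameters" }
    return minBufferSize
}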

6: mode

AudioTrack offers two playback modes: static and streaming. In static mode, all of the data is written into the playback buffer in one go; it is simple and efficient and typically used for short clips such as ringtones and system prompts. In streaming mode, audio data is written continuously at regular intervals; in principle it works for any playback scenario.

The available values are defined as constants in the AudioTrack class: MODE_STATIC and MODE_STREAM. Pass whichever fits your use case.
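
As the Javadoc quoted above notes, these stream-type constructors are deprecated in favor of AudioTrack.Builder with AudioAttributes and AudioFormat. Here is a sketch of the equivalent configuration (requires API 23+; treat the specific attribute choices as an assumption, not part of the original article):

import android.media.AudioAttributes
import android.media.AudioFormat
import android.media.AudioTrack

// Roughly equivalent to AudioTrack(STREAM_MUSIC, 44100, CHANNEL_OUT_MONO,
// ENCODING_PCM_16BIT, bufferSizeInBytes, MODE_STREAM) using the Builder API
fun buildTrack(bufferSizeInBytes: Int): AudioTrack =
        AudioTrack.Builder()
                .setAudioAttributes(AudioAttributes.Builder()
                        .setUsage(AudioAttributes.USAGE_MEDIA) // replaces STREAM_MUSIC
                        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                        .build())
                .setAudioFormat(AudioFormat.Builder()
                        .setSampleRate(44100)
                        .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
                        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                        .build())
                .setBufferSizeInBytes(bufferSizeInBytes)
                .setTransferMode(AudioTrack.MODE_STREAM)
                .build()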

3. Wrapping AudioTrack

Below is my simple wrapper around the AudioTrack API, an AudioTrackManager utility class. Here is the code:

package com.bnd.myaudioandvideo.utils

import android.media.AudioFormat
import android.media.AudioManager
import android.media.AudioTrack
import android.os.Process
import java.io.DataInputStream
import java.io.File
import java.io.FileInputStream

class AudioTrackManager {
    private var mAudioTrack: AudioTrack? = null
    private var mDis: DataInputStream? = null // input stream for the file being played
    private var mRecordThread: Thread? = null
    private var isStart = false

    // Buffer size in bytes, obtained from AudioTrack.getMinBufferSize()
    private var mMinBufferSize = 0
    private fun initData() {
        // Derive the minimum buffer size from the sample rate, sample format and channel count.
        // Note: this is the minimum streaming buffer size in bytes, not one second of audio.
        mMinBufferSize = AudioTrack.getMinBufferSize(mSampleRateInHz, mChannelConfig, mAudioFormat)
        // Create the AudioTrack
        mAudioTrack = AudioTrack(mStreamType, mSampleRateInHz, mChannelConfig,
                mAudioFormat, mMinBufferSize, mMode)
    }

    /**
     * Destroy the playback thread
     */
    private fun destroyThread() {
        try {
            isStart = false
            if (null != mRecordThread && Thread.State.RUNNABLE == mRecordThread!!.state) {
                try {
                    Thread.sleep(500)
                    mRecordThread!!.interrupt()
                } catch (e: Exception) {
                    mRecordThread = null
                }
            }
            mRecordThread = null
        } catch (e: Exception) {
            e.printStackTrace()
        } finally {
            mRecordThread = null
        }
    }

    /**
     * Start the playback thread
     */
    private fun startThread() {
        destroyThread()
        isStart = true
        if (mRecordThread == null) {
            mRecordThread = Thread(recordRunnable)
            mRecordThread!!.start()
        }
    }

    /**
     * Playback thread
     */
    var recordRunnable = Runnable {
        try {
            // Raise the thread priority for audio playback
            Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_AUDIO)
            val tempBuffer = ByteArray(mMinBufferSize)
            var readCount: Int
            while (mDis!!.available() > 0) {
                // DataInputStream.read() returns the number of bytes read, or -1 at end of
                // stream (the AudioTrack.ERROR_* constants apply to write(), not to read())
                readCount = mDis!!.read(tempBuffer)
                if (readCount <= 0) {
                    continue
                }
                // If the AudioTrack was released (e.g. by a previous stop), its state is
                // STATE_UNINITIALIZED; re-create it before writing
                if (mAudioTrack!!.state == AudioTrack.STATE_UNINITIALIZED) {
                    initData()
                }
                mAudioTrack!!.play()
                // Write the PCM data while playing
                mAudioTrack!!.write(tempBuffer, 0, readCount)
            }
            stopPlay() // Stop once the file has been fully played
        } catch (e: Exception) {
            e.printStackTrace()
        }
    }

    /**
     * Set the audio file to play
     * @param path
     * @throws Exception
     */
    @Throws(Exception::class)
    private fun setPath(path: String) {
        val file = File(path)
        mDis = DataInputStream(FileInputStream(file))
    }

    /**
     * Start playback
     *
     * @param path
     */
    fun startPlay(path: String) {
        try {
            // Before starting, one could also verify that the AudioTrack initialized
            // successfully and that mMinBufferSize is not ERROR / ERROR_BAD_VALUE
            // (i.e. that getMinBufferSize() supports these parameters on this device)
            setPath(path)
            startThread()
        } catch (e: Exception) {
            e.printStackTrace()
        }
    }

    /**
     * Stop playback
     */
    fun stopPlay() {
        try {
            destroyThread() // destroy the playback thread
            if (mAudioTrack != null) {
                if (mAudioTrack!!.state == AudioTrack.STATE_INITIALIZED) { // initialized successfully
                    mAudioTrack!!.stop() // stop playback
                }
                mAudioTrack!!.release() // release AudioTrack resources
            }
            if (mDis != null) {
                mDis!!.close() // close the input stream
            }
        } catch (e: Exception) {
            e.printStackTrace()
        }
    }

    companion object {
        @Volatile
        private var mInstance: AudioTrackManager? = null

        // Audio stream type
        private const val mStreamType = AudioManager.STREAM_MUSIC

        // Sample rate (MediaRecorder typically uses 8000 Hz, AAC typically 44100 Hz.
        // We use 44100 Hz, the most common rate; the official docs state it is the
        // only value guaranteed to work on all devices)
        private const val mSampleRateInHz = 44100

        // Channel configuration for playback; the constants are defined in AudioFormat
        private const val mChannelConfig = AudioFormat.CHANNEL_OUT_MONO // mono

        // Sample format ("bit depth"); the constants are defined in AudioFormat. We usually
        // choose ENCODING_PCM_16BIT or ENCODING_PCM_8BIT. PCM stands for pulse-code
        // modulation, i.e. raw audio samples, so each sample can have 16-bit or 8-bit
        // resolution: 16-bit takes more space and processing power but sounds closer
        // to the original.
        private const val mAudioFormat = AudioFormat.ENCODING_PCM_16BIT

        // STREAM means the application writes data to the AudioTrack chunk by chunk via
        // write(), much like sending data over a socket: the app obtains PCM data from
        // somewhere (e.g. a decoder) and then writes it to the AudioTrack.
        private const val mMode = AudioTrack.MODE_STREAM

        /**
         * Get the singleton instance
         *
         * @return
         */
        val instance: AudioTrackManager?
            get() {
                if (mInstance == null) {
                    synchronized(AudioTrackManager::class.java) {
                        if (mInstance == null) {
                            mInstance = AudioTrackManager()
                        }
                    }
                }
                return mInstance
            }
    }

    init {
        initData()
    }
}
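
Usage is then a one-liner. For example (the PCM file path below is a hypothetical placeholder; the file must be raw PCM matching the configured sample rate, channel count and format):

// Play a decoded PCM file (44100 Hz, mono, 16-bit, matching the constants above)
AudioTrackManager.instance?.startPlay("/sdcard/test.pcm")

// ...and later, to stop and release:
AudioTrackManager.instance?.stopPlay()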

That completes our simple AudioTrack wrapper. The code is commented in detail; if anything is unclear, or if you have suggestions, feel free to leave a comment below.

4. Summary

There is a lot to learn in audio development, and it takes patience: only by accumulating knowledge step by step can you master it. Finally, here are a couple of bloggers worth following:

  1. CSDN blogger: 雷霄骅 ("雷神")
  2. 51CTO blog: Jhuster's column (《Jhuster的专栏》)

Reposted from blog.csdn.net/ljx1400052550/article/details/114312021