Android Framework Audio Subsystem (05): AudioTrack Usage Examples

Master outline for this series: Topic Outline — Android Framework Audio Subsystem


Key points and summary of this chapter:

This chapter covers the AudioTrack portion (upper left of the chapter mind map). It analyzes two test programs (the C++-layer shared_mem_test and the Java-layer MediaAudioTrackTest.java) and then traces the call flow from the Java-layer AudioTrack down to the C++-layer AudioTrack (the constructor flow and the write flow).


We use two test programs as our starting point: the C++-layer shared_mem_test and the Java-layer MediaAudioTrackTest.java, located at:

  • frameworks/base/media/tests/audiotests/shared_mem_test.cpp
  • frameworks/base/media/tests/mediaframeworktest/src/com/android/mediaframeworktest/functional/audio/MediaAudioTrackTest.java

1 Analysis of the shared_mem_test program

The entry-point main function of shared_mem_test is:

int main(int argc, char *argv[]) {
    return android::main();
}

Continuing with android::main() (defined inside namespace android), implemented as follows:

int main() {
    ProcessState::self()->startThreadPool();
    AudioTrackTest *test;

    test = new AudioTrackTest();
    test->Execute();
    delete test;

    return 0;
}

Note that ProcessState::self()->startThreadPool() starts the Binder thread pool, which is needed because AudioTrack talks to AudioFlinger over Binder. The analysis then focuses on two key points: the AudioTrackTest constructor and its Execute method.

@1 The AudioTrackTest constructor

The AudioTrackTest constructor is implemented as follows:

AudioTrackTest::AudioTrackTest(void) {
    InitSine();         // init sine table
}

Continuing with InitSine, implemented as follows:

void AudioTrackTest::InitSine(void) {
    double phi = 0;
    double dPhi = 2 * M_PI / SIN_SZ;
    for(int i0 = 0; i0<SIN_SZ; i0++) {
        long d0;

        d0 = 32768. * sin(phi);
        phi += dPhi;
        if(d0 >= 32767) d0 = 32767;
        if(d0 <= -32768) d0 = -32768;
        sin1024[i0] = (short)d0;
    }
}

InitSine fills the 1024-entry table sin1024 with one full period of a sine wave as signed 16-bit (Q15) samples, clamping the peak value 32768 back into the int16 range. This table is the waveform data used in the next step.
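
As a quick illustration of the table layout and the clamping (a hypothetical standalone check, not part of shared_mem_test):

#include <cassert>
#include <cmath>

int main() {
    const int SIN_SZ = 1024;
    short sin1024[SIN_SZ];
    double phi = 0, dPhi = 2 * M_PI / SIN_SZ;   // same loop as InitSine()
    for (int i0 = 0; i0 < SIN_SZ; i0++) {
        long d0 = 32768. * sin(phi);
        phi += dPhi;
        if (d0 >= 32767) d0 = 32767;
        if (d0 <= -32768) d0 = -32768;
        sin1024[i0] = (short)d0;
    }
    assert(sin1024[0] == 0);          // sin(0) = 0
    assert(sin1024[256] == 32767);    // sin(pi/2) = 1.0 -> 32768, clamped to 32767
    assert(sin1024[512] == 0);        // sin(pi) ~= 0
    assert(sin1024[768] == -32768);   // sin(3*pi/2) = -1.0 -> -32768, already in range
    return 0;
}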

@2 The Execute method of AudioTrackTest

The Execute method of AudioTrackTest is implemented as follows:

void AudioTrackTest::Execute(void) {
    if (Test01() == 0) {
        ALOGD("01 passed\n");
    } else {
        ALOGD("01 failed\n");
    }
}

Continuing with Test01, implemented as follows:

int AudioTrackTest::Test01() {

    sp<MemoryDealer> heap;
    sp<IMemory> iMem;
    uint8_t* p;

    short smpBuf[BUF_SZ];
    long rate = 44100;
    unsigned long phi;
    unsigned long dPhi;
    long amplitude;
    long freq = 1237;
    float f0;

    f0 = pow(2., 32.) * freq / (float)rate;
    dPhi = (unsigned long)f0;
    amplitude = 1000;
    phi = 0;
    Generate(smpBuf, BUF_SZ, amplitude, phi, dPhi);  // fill buffer

    for (int i = 0; i < 1024; i++) {
        heap = new MemoryDealer(1024*1024, "AudioTrack Heap Base");

        iMem = heap->allocate(BUF_SZ*sizeof(short));

        p = static_cast<uint8_t*>(iMem->pointer());
        memcpy(p, smpBuf, BUF_SZ*sizeof(short));
        // Key point 1: create an AudioTrack object, handing it the pre-filled shared memory
        sp<AudioTrack> track = new AudioTrack(AUDIO_STREAM_MUSIC,// stream type
               rate,
               AUDIO_FORMAT_PCM_16_BIT,// word length, PCM
               AUDIO_CHANNEL_OUT_MONO,
               iMem);

        status_t status = track->initCheck();
        if(status != NO_ERROR) {
            track.clear();
            ALOGD("Failed for initCheck()");
            return -1;
        }

        // Key point 2: start playback
        track->start();
        usleep(20000);

        ALOGD("stop");
        track->stop();
        iMem.clear();
        heap.clear();
        usleep(20000);
    }
    return 0;
}

From the above analysis, the usage flow of the native (C++-layer) AudioTrack is:

  1. Build the parameters and pass them to the AudioTrack constructor; note that iMem (app-side shared memory carved out of a MemoryDealer heap) already carries a buffer filled with sample data (see the sketch of Generate() below).
  2. Call start() to begin playback.
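
A word on the phase math in Test01: dPhi = 2^32 · freq / rate treats an unsigned 32-bit integer as a fixed-point phase accumulator, where one full wrap of the integer equals one sine period; adding dPhi once per sample therefore produces a 1237 Hz tone at a 44100 Hz sample rate. The Generate() helper itself is not reproduced in this article; below is a minimal sketch of what it plausibly does, assuming the top 10 bits of the accumulator index the 1024-entry sin1024 table (a hypothetical reconstruction, not the actual shared_mem_test code):

// Hypothetical sketch: fill 'buffer' with 'count' 16-bit samples taken from
// the sin1024 table built by InitSine(). 'phi' is a fixed-point phase
// accumulator in which 2^32 corresponds to one full period, so its top
// 10 bits select one of the 1024 table entries.
static void GenerateSketch(short *buffer, long count, long amplitude,
                           unsigned long &phi, unsigned long dPhi,
                           const short *sin1024) {
    for (long i = 0; i < count; i++) {
        unsigned idx = (phi >> 22) & 0x3ff;                    // top 10 of 32 bits
        buffer[i] = (short)((amplitude * sin1024[idx]) >> 15); // scale the Q15 sample
        phi = (phi + dPhi) & 0xFFFFFFFFul;  // advance, wrapping at 2^32 explicitly
    }
}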

2 Analysis of the MediaAudioTrackTest program

MediaAudioTrackTest contains many test cases; here we analyze one of them, implemented as follows:

public class MediaAudioTrackTest extends ActivityInstrumentationTestCase2<MediaFrameworkTest> {    
    private String TAG = "MediaAudioTrackTest";
    //...
    //Test case 1: setPlaybackHeadPosition() on playing track
    @LargeTest
    public void testSetPlaybackHeadPositionPlaying() throws Exception {
        // constants for test
        final String TEST_NAME = "testSetPlaybackHeadPositionPlaying";
        final int TEST_SR = 22050;
        final int TEST_CONF = AudioFormat.CHANNEL_OUT_MONO;
        final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
        final int TEST_MODE = AudioTrack.MODE_STREAM;
        final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;
        
        //-------- initialization --------------
        int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
        // Key point 1: build the parameters and pass them to the AudioTrack constructor
        AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT, 
                2*minBuffSize, TEST_MODE);
        byte data[] = new byte[minBuffSize];
        //--------    test        --------------
        assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);
        track.write(data, 0, data.length);
        track.write(data, 0, data.length);
        // Key point 2: start playback
        track.play();
        assertTrue(TEST_NAME,
                track.setPlaybackHeadPosition(10) == AudioTrack.ERROR_INVALID_OPERATION);
        //-------- tear down      --------------
        track.release();
    }
    //...
}

From the above analysis, the usage flow of the Java-layer AudioTrack is:

  1. Build the parameters and pass them to the AudioTrack constructor.
  2. Call write() to push data; this plays the role of the pre-filled iMem in shared_mem_test.
  3. Call play() to start playback.

Next we trace how the Java-layer AudioTrack reaches the C++-layer AudioTrack along two paths: the constructor and the write function.


3 From the Java-layer AudioTrack to the C++-layer AudioTrack

3.1 Constructor: from the Java-layer AudioTrack to the C++-layer AudioTrack

Creating a Java-layer AudioTrack ultimately creates a native AudioTrack as well. The Java-layer constructor is implemented as follows:

public AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
        int mode, int sessionId)
                throws IllegalArgumentException {
	//...
    // native initialization
    int initResult = native_setup(new WeakReference<AudioTrack>(this), mAttributes,
            mSampleRate, mChannels, mAudioFormat,
            mNativeBufferSizeInBytes, mDataLoadMode, session);
    //...
    mSessionId = session[0];
    if (mDataLoadMode == MODE_STATIC) {
        mState = STATE_NO_STATIC_DATA;
    } else {
        mState = STATE_INITIALIZED;
    }
}

According to the JNI method mapping (in the descriptor, the two Ljava/lang/Object; entries are the WeakReference and the AudioAttributes, IIIII are the five int parameters, [I is the session-id out-array, and the trailing I is the int return value):

{"native_setup",     "(Ljava/lang/Object;Ljava/lang/Object;IIIII[I)I",
                                         (void *)android_media_AudioTrack_setup},

So native_setup corresponds to android_media_AudioTrack_setup, implemented as follows:

static jint
android_media_AudioTrack_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jobject jaa,
        jint sampleRateInHertz, jint javaChannelMask,
        jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession) {
    //...
    // Key point 1: create the native AudioTrack object
    sp<AudioTrack> lpTrack = new AudioTrack();
    //...
    switch (memoryMode) {// set different parameters for the two modes, MODE_STREAM and MODE_STATIC
    case MODE_STREAM:
        // Key point 2.1: call set() to configure the track (MODE_STREAM)
        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                0,// shared mem
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SYNC,
                NULL,                         // default offloadInfo
                -1, -1,                       // default uid, pid values
                paa);
        break;
    case MODE_STATIC:
        // the application side allocates the shared memory
        if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
            ALOGE("Error creating AudioTrack in static mode: error creating mem heap base");
            goto native_init_failure;
        }
        // Key point 2.2: call set() to configure the track (MODE_STATIC)
        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                lpJniStorage->mMemBase,// shared mem
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SHARED,
                NULL,                         // default offloadInfo
                -1, -1,                       // default uid, pid values
                paa);
        break;
    //...
    default:
        ALOGE("Unknown mode %d", memoryMode);
        goto native_init_failure;
    }
    //... (on success the function returns earlier; every failure path jumps to this label)
native_init_failure:
    //...
    return (jint) AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
}

Two key operations happen here: a C++-layer AudioTrack is created with the no-argument constructor, and its set() method is then called. For MODE_STREAM, no shared memory is passed (the shared-mem argument is 0 and the transfer type is TRANSFER_SYNC; data arrives later through write()); for MODE_STATIC, the JNI layer first allocates shared memory itself (allocSharedMem) and hands it to set() with TRANSFER_SHARED. The C++-layer AudioTrack constructors are:

AudioTrack::AudioTrack()// no-argument constructor; set() must be called afterwards
    : mStatus(NO_INIT),
      mIsTimed(false),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0)
{
    mAttributes.content_type = AUDIO_CONTENT_TYPE_UNKNOWN;
    mAttributes.usage = AUDIO_USAGE_UNKNOWN;
    mAttributes.flags = 0x0;
    strcpy(mAttributes.tags, "");
}

AudioTrack::AudioTrack(// parameterized constructor; no separate set() call is needed
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        int sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes)
    : mStatus(NO_INIT),
      mIsTimed(false),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0)
{
    mStatus = set(streamType, sampleRate, format, channelMask,
            frameCount, flags, cbf, user, notificationFrames,
            0 /*sharedBuffer*/, false /*threadCanCallJava*/, sessionId, transferType,
            offloadInfo, uid, pid, pAttributes);
}

So the native AudioTrack can be used in two ways (a minimal sketch of both patterns follows the list):

  • no-argument constructor: set() must be called explicitly afterwards;
  • parameterized constructor: it calls set() internally.
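
A minimal sketch of the two patterns (a hypothetical snippet, assuming the default values that this version of the AudioTrack header declares for the trailing set() parameters):

#include <media/AudioTrack.h>

using namespace android;

void constructionPatterns() {
    // Pattern 1: no-argument constructor plus an explicit set() call,
    // which is what the JNI layer (android_media_AudioTrack_setup) does.
    sp<AudioTrack> track1 = new AudioTrack();
    status_t status1 = track1->set(AUDIO_STREAM_MUSIC,
                                   44100,
                                   AUDIO_FORMAT_PCM_16_BIT,
                                   AUDIO_CHANNEL_OUT_MONO);

    // Pattern 2: parameterized constructor, which invokes set() internally;
    // initCheck() reports whether that internal set() succeeded.
    sp<AudioTrack> track2 = new AudioTrack(AUDIO_STREAM_MUSIC,
                                           44100,
                                           AUDIO_FORMAT_PCM_16_BIT,
                                           AUDIO_CHANNEL_OUT_MONO);
    status_t status2 = track2->initCheck();

    (void)status1; (void)status2;   // real code would check these against NO_ERROR
}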

To summarize: playing a sound always involves creating an AudioTrack, and creating the Java AudioTrack creates a C++ AudioTrack underneath. The core of the analysis is therefore the C++ AudioTrack class, and the key function in its creation is set(). Note also that an AudioTrack only determines the attributes of the sound; it does not by itself decide which device the sound plays from. The set() function is covered in detail in Chapter 6.

3.2 write: from the Java-layer AudioTrack to the C++-layer AudioTrack

The Java layer offers several write() overloads:

public int write(byte[] audioData, int offsetInBytes, int sizeInBytes) {
    //...
    int ret = native_write_byte(audioData, offsetInBytes, sizeInBytes, mAudioFormat,
            true /*isBlocking*/);
    //...
    return ret;
}

public int write(short[] audioData, int offsetInShorts, int sizeInShorts) {
    //...
    int ret = native_write_short(audioData, offsetInShorts, sizeInShorts, mAudioFormat);
    //...
    return ret;
}

public int write(float[] audioData, int offsetInFloats, int sizeInFloats,
        @WriteMode int writeMode) {
    //...
    int ret = native_write_float(audioData, offsetInFloats, sizeInFloats, mAudioFormat,
            writeMode == WRITE_BLOCKING);
    //...
    return ret;
}

public int write(ByteBuffer audioData, int sizeInBytes,
        @WriteMode int writeMode) {
    int ret = 0;
    if (audioData.isDirect()) {
        ret = native_write_native_bytes(audioData,
                audioData.position(), sizeInBytes, mAudioFormat,
                writeMode == WRITE_BLOCKING);
    } else {
        ret = native_write_byte(NioUtils.unsafeArray(audioData),
                NioUtils.unsafeArrayOffset(audioData) + audioData.position(),
                sizeInBytes, mAudioFormat,
                writeMode == WRITE_BLOCKING);
    }
    return ret;
}

All of these delegate to native methods; following them down:

static jint android_media_AudioTrack_write_byte(JNIEnv *env,  jobject thiz,
                                                  jbyteArray javaAudioData,
                                                  jint offsetInBytes, jint sizeInBytes,
                                                  jint javaAudioFormat,
                                                  jboolean isWriteBlocking) {
    sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
    //...
    jint written = writeToTrack(lpTrack, javaAudioFormat, cAudioData, offsetInBytes, sizeInBytes,
            isWriteBlocking == JNI_TRUE /* blocking */);
    //...
    return written;
}

static jint android_media_AudioTrack_write_native_bytes(JNIEnv *env,  jobject thiz,
        jbyteArray javaBytes, jint byteOffset, jint sizeInBytes,
        jint javaAudioFormat, jboolean isWriteBlocking) {
    sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
    //...
    jint written = writeToTrack(lpTrack, javaAudioFormat, bytes.get(), byteOffset,
            sizeInBytes, isWriteBlocking == JNI_TRUE /* blocking */);
    return written;
}

static jint android_media_AudioTrack_write_short(JNIEnv *env,  jobject thiz,
                                                  jshortArray javaAudioData,
                                                  jint offsetInShorts, jint sizeInShorts,
                                                  jint javaAudioFormat) {
    sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
    //...
    jint written = writeToTrack(lpTrack, javaAudioFormat, (jbyte *)cAudioData,
                                offsetInShorts * sizeof(short), sizeInShorts * sizeof(short),
            true /*blocking write, legacy behavior*/);
    //...
    return written;
}

static jint android_media_AudioTrack_write_float(JNIEnv *env,  jobject thiz,
                                                  jfloatArray javaAudioData,
                                                  jint offsetInFloats, jint sizeInFloats,
                                                  jint javaAudioFormat,
                                                  jboolean isWriteBlocking) {

    sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
    //...
    jint written = writeToTrack(lpTrack, javaAudioFormat, (jbyte *)cAudioData,
                                offsetInFloats * sizeof(float), sizeInFloats * sizeof(float),
                                isWriteBlocking == JNI_TRUE /* blocking */);
    //...
    return written;
}

All of these native functions ultimately call writeToTrack(), implemented as follows:

jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, const jbyte* data,
                  jint offsetInBytes, jint sizeInBytes, bool blocking = true) {
    ssize_t written = 0;
    // the playback thread provides the shared memory: call the C++-layer track's write()
    if (track->sharedBuffer() == 0) {
        written = track->write(data + offsetInBytes, sizeInBytes, blocking);
        if (written == (ssize_t) WOULD_BLOCK) {
            written = 0;
        }
    } else {// the app side provides the shared memory: just memcpy directly
        const audio_format_t format = audioFormatToNative(audioFormat);
        switch (format) {

        default:
        case AUDIO_FORMAT_PCM_FLOAT:
        case AUDIO_FORMAT_PCM_16_BIT: {
            if ((size_t)sizeInBytes > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size();
            }
            // copy the data into the shared memory
            memcpy(track->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
            written = sizeInBytes;
            } break;

        case AUDIO_FORMAT_PCM_8_BIT: {
            // same as above, except 8-bit data needs an intermediate conversion step
            if (((size_t)sizeInBytes)*2 > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size() / 2;
            }
            int count = sizeInBytes;
            int16_t *dst = (int16_t *)track->sharedBuffer()->pointer();
            const uint8_t *src = (const uint8_t *)(data + offsetInBytes);
            memcpy_to_i16_from_u8(dst, src, count);
            written = sizeInBytes;
            } break;
        }
    }
    return written;
}

From here on it is ordinary C++-layer track usage. In brief:

  1. If track->sharedBuffer() == 0, the playback thread provides the shared memory (stream mode), and the C++-layer track's write() method is called.
  2. If track->sharedBuffer() != 0, the app side provides the shared memory (static mode), and the data is simply memcpy'ed into track->sharedBuffer(); for 8-bit data an extra conversion runs on the way in, as sketched below.
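
For reference, memcpy_to_i16_from_u8() comes from the audio_utils library: 8-bit PCM is unsigned and biased around 0x80, while 16-bit PCM is signed, so each byte must be re-centered and widened. A minimal sketch of the equivalent conversion (a hypothetical re-implementation, not the library source):

#include <stddef.h>
#include <stdint.h>

// Re-center each unsigned 8-bit sample around zero, then shift it into the
// high byte of the signed 16-bit destination sample.
static void to_i16_from_u8_sketch(int16_t *dst, const uint8_t *src, size_t count) {
    for (size_t i = 0; i < count; i++) {
        dst[i] = (int16_t)((src[i] - 0x80) << 8);
    }
}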

The track's write() method is covered in detail in Chapter 7.


Reprinted from blog.csdn.net/vviccc/article/details/105286614