Android Framework Audio Subsystem (05) AudioTrack Use Case

Master link for this series: thematic sub-directory "Android Framework Audio Subsystem"


Summary of this chapter's key points:

This chapter focuses on the AudioTrack portion (the upper-left part of the series mind map). It first analyzes two test programs (shared_mem_test in the C++ layer and MediaAudioTrackTest.java in the Java layer), then traces the call flow from the Java-layer AudioTrack to the C++-layer AudioTrack (the constructor flow and the write flow).


Two test programs are used here: shared_mem_test in the C++ layer and MediaAudioTrackTest.java in the Java layer. Their source paths are:

  • frameworks/base/media/tests/audiotests/shared_mem_test.cpp
  • frameworks/base/media/tests/mediaframeworktest/src/com/android/mediaframeworktest/functional/audio/MediaAudioTrackTest.java

1 shared_mem_test test program analysis

The entry point of the shared_mem_test test program is its main function:

int main(int argc, char *argv[]) {
    return android::main();
}

Continuing into android::main(), the implementation is:

int main() {
    ProcessState::self()->startThreadPool();
    AudioTrackTest *test;

    test = new AudioTrackTest();
    test->Execute();
    delete test;

    return 0;
}

Here we focus on two key points: the AudioTrackTest constructor and its Execute method.

@ 1 AudioTrackTest constructor

The AudioTrackTest constructor code is implemented as follows:

AudioTrackTest::AudioTrackTest(void) {
    InitSine();         // init sine table
}

Continuing into InitSine, the implementation is:

void AudioTrackTest::InitSine(void) {
    double phi = 0;
    double dPhi = 2 * M_PI / SIN_SZ;
    for(int i0 = 0; i0<SIN_SZ; i0++) {
        long d0;

        d0 = 32768. * sin(phi);
        phi += dPhi;
        if(d0 >= 32767) d0 = 32767;
        if(d0 <= -32768) d0 = -32768;
        sin1024[i0] = (short)d0;
    }
}

This builds the sine lookup table used to generate the waveform data in the next step.

@ 2 AudioTrackTest's Execute method

The Execute method of AudioTrackTest is implemented as follows:

void AudioTrackTest::Execute(void) {
    if (Test01() == 0) {
        ALOGD("01 passed\n");
    } else {
        ALOGD("01 failed\n");
    }
}

Continuing into Test01, the implementation is:

int AudioTrackTest::Test01() {

    sp<MemoryDealer> heap;
    sp<IMemory> iMem;
    uint8_t* p;

    short smpBuf[BUF_SZ];
    long rate = 44100;
    unsigned long phi;
    unsigned long dPhi;
    long amplitude;
    long freq = 1237;
    float f0;

    f0 = pow(2., 32.) * freq / (float)rate;
    dPhi = (unsigned long)f0;
    amplitude = 1000;
    phi = 0;
    Generate(smpBuf, BUF_SZ, amplitude, phi, dPhi);  // fill buffer

    for (int i = 0; i < 1024; i++) {
        heap = new MemoryDealer(1024*1024, "AudioTrack Heap Base");

        iMem = heap->allocate(BUF_SZ*sizeof(short));

        p = static_cast<uint8_t*>(iMem->pointer());
        memcpy(p, smpBuf, BUF_SZ*sizeof(short));
        // Key point 1: create an AudioTrack object and pass it the data
        sp<AudioTrack> track = new AudioTrack(AUDIO_STREAM_MUSIC,// stream type
               rate,
               AUDIO_FORMAT_PCM_16_BIT,// word length, PCM
               AUDIO_CHANNEL_OUT_MONO,
               iMem);

        status_t status = track->initCheck();
        if(status != NO_ERROR) {
            track.clear();
            ALOGD("Failed for initCheck()");
            return -1;
        }

        // Key point 2: start playback
        track->start();
        usleep(20000);

        ALOGD("stop");
        track->stop();
        iMem.clear();
        heap.clear();
        usleep(20000);
    }
    return 0;
}

From the above analysis, the usage flow of the native-layer AudioTrack is:

  1. Construct the parameters and pass them to AudioTrack; note that iMem here already contains the buffer filled with data.
  2. Call start() to begin playback.

2 MediaAudioTrackTest test program analysis

There are many test cases in MediaAudioTrackTest; one of them is analyzed here:

public class MediaAudioTrackTest extends ActivityInstrumentationTestCase2<MediaFrameworkTest> {    
    private String TAG = "MediaAudioTrackTest";
    //...
    //Test case 1: setPlaybackHeadPosition() on playing track
    @LargeTest
    public void testSetPlaybackHeadPositionPlaying() throws Exception {
        // constants for test
        final String TEST_NAME = "testSetPlaybackHeadPositionPlaying";
        final int TEST_SR = 22050;
        final int TEST_CONF = AudioFormat.CHANNEL_OUT_MONO;
        final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
        final int TEST_MODE = AudioTrack.MODE_STREAM;
        final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;
        
        //-------- initialization --------------
        int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
        // Key point 1: build the parameters and pass them to the AudioTrack object
        AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT, 
                2*minBuffSize, TEST_MODE);
        byte data[] = new byte[minBuffSize];
        //--------    test        --------------
        assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);
        track.write(data, 0, data.length);
        track.write(data, 0, data.length);
        // Key point 2: play
        track.play();
        assertTrue(TEST_NAME,
                track.setPlaybackHeadPosition(10) == AudioTrack.ERROR_INVALID_OPERATION);
        //-------- tear down      --------------
        track.release();
    }
    //...
}

From the above analysis, the usage flow of the Java-layer AudioTrack is:

  1. Construct the parameters and pass them to AudioTrack.
  2. Call write() to write the data; this is the counterpart of the data carried in iMem in shared_mem_test.
  3. Call play() to start playback.

Next we analyze the call flow from the Java-layer AudioTrack to the C++-layer AudioTrack: first the constructor, then the write function.


3 From the Java-layer AudioTrack to the C++-layer AudioTrack

3.1 Constructor flow from the Java-layer AudioTrack to the C++-layer AudioTrack

Creating an AudioTrack object in the Java layer ultimately creates an AudioTrack object in the native layer. Start with the Java-layer AudioTrack constructor:

public AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
        int mode, int sessionId)
                throws IllegalArgumentException {
	//...
    // native initialization
    int initResult = native_setup(new WeakReference<AudioTrack>(this), mAttributes,
            mSampleRate, mChannels, mAudioFormat,
            mNativeBufferSizeInBytes, mDataLoadMode, session);
    //...
    mSessionId = session[0];
    if (mDataLoadMode == MODE_STATIC) {
        mState = STATE_NO_STATIC_DATA;
    } else {
        mState = STATE_INITIALIZED;
    }
}

According to the JNI method mapping:

{"native_setup",     "(Ljava/lang/Object;Ljava/lang/Object;IIIII[I)I",
                                         (void *)android_media_AudioTrack_setup},

Continuing into android_media_AudioTrack_setup, the implementation is:

static jint
android_media_AudioTrack_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jobject jaa,
        jint sampleRateInHertz, jint javaChannelMask,
        jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession) {
	//...
    // Key point 1: create the native AudioTrack object
    sp<AudioTrack> lpTrack = new AudioTrack();
	//...
    switch (memoryMode) {// parameters are set differently for MODE_STREAM and MODE_STATIC
    case MODE_STREAM:
        // Key point 2.1: call set() to configure the parameters
        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                0,// shared mem
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SYNC,
                NULL,                         // default offloadInfo
                -1, -1,                       // default uid, pid values
                paa);
        break;
    case MODE_STATIC:
        // the application side allocates the shared memory
        if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
            ALOGE("Error creating AudioTrack in static mode: error creating mem heap base");
            goto native_init_failure;
        }
        // Key point 2.2: call set() to configure the parameters
        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user));
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                lpJniStorage->mMemBase,// shared mem
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SHARED,
                NULL,                         // default offloadInfo
                -1, -1,                       // default uid, pid values
                paa);
        break;
	//...
    default:
        ALOGE("Unknown mode %d", memoryMode);
        goto native_init_failure;
    }
	//...
native_init_failure:
    //...
    return (jint) AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
}

Two key operations happen here: creating the C++-layer AudioTrack object, and calling its set method. The C++ AudioTrack constructors are implemented as follows:

AudioTrack::AudioTrack()// parameterless constructor; set() must be called afterwards
    : mStatus(NO_INIT),
      mIsTimed(false),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0)
{
    mAttributes.content_type = AUDIO_CONTENT_TYPE_UNKNOWN;
    mAttributes.usage = AUDIO_USAGE_UNKNOWN;
    mAttributes.flags = 0x0;
    strcpy(mAttributes.tags, "");
}

AudioTrack::AudioTrack(// parameterized constructor; calls set() itself, no separate call needed
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        int sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes)
    : mStatus(NO_INIT),
      mIsTimed(false),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0)
{
    mStatus = set(streamType, sampleRate, format, channelMask,
            frameCount, flags, cbf, user, notificationFrames,
            0 /*sharedBuffer*/, false /*threadCanCallJava*/, sessionId, transferType,
            offloadInfo, uid, pid, pAttributes);
}

As you can see, AudioTrack can be constructed in two ways:

  • The parameterless constructor, after which set() must be called explicitly to configure the parameters.
  • The parameterized constructor, which calls set() directly with the given parameters.

To sum up: an AudioTrack object must be created to play sound, and creating a Java AudioTrack object causes a C++ AudioTrack object to be created, so the core of the analysis is the C++ AudioTrack class. An important function in its creation is set. Note that AudioTrack only determines the attributes of the sound, not which device the sound is played from. The set function is covered in detail in Chapter 6.

3.2 write flow from the Java-layer AudioTrack to the C++-layer AudioTrack

The Java layer provides several write() overloads:

public int write(byte[] audioData, int offsetInBytes, int sizeInBytes) {
    //...
    int ret = native_write_byte(audioData, offsetInBytes, sizeInBytes, mAudioFormat,
            true /*isBlocking*/);
    //...
    return ret;
}

public int write(short[] audioData, int offsetInShorts, int sizeInShorts) {
    //...
    int ret = native_write_short(audioData, offsetInShorts, sizeInShorts, mAudioFormat);
    //...
    return ret;
}

public int write(float[] audioData, int offsetInFloats, int sizeInFloats,
        @WriteMode int writeMode) {
    //...
    int ret = native_write_float(audioData, offsetInFloats, sizeInFloats, mAudioFormat,
            writeMode == WRITE_BLOCKING);
    //...
    return ret;
}

public int write(ByteBuffer audioData, int sizeInBytes,
        @WriteMode int writeMode) {
    int ret = 0;
    if (audioData.isDirect()) {
        ret = native_write_native_bytes(audioData,
                audioData.position(), sizeInBytes, mAudioFormat,
                writeMode == WRITE_BLOCKING);
    } else {
        ret = native_write_byte(NioUtils.unsafeArray(audioData),
                NioUtils.unsafeArrayOffset(audioData) + audioData.position(),
                sizeInBytes, mAudioFormat,
                writeMode == WRITE_BLOCKING);
    }
    return ret;
}

These all call into the native layer; following up, the JNI implementations are:

static jint android_media_AudioTrack_write_byte(JNIEnv *env,  jobject thiz,
                                                  jbyteArray javaAudioData,
                                                  jint offsetInBytes, jint sizeInBytes,
                                                  jint javaAudioFormat,
                                                  jboolean isWriteBlocking) {
    sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
    //...
    jint written = writeToTrack(lpTrack, javaAudioFormat, cAudioData, offsetInBytes, sizeInBytes,
            isWriteBlocking == JNI_TRUE /* blocking */);
    //...
    return written;
}

static jint android_media_AudioTrack_write_native_bytes(JNIEnv *env,  jobject thiz,
        jbyteArray javaBytes, jint byteOffset, jint sizeInBytes,
        jint javaAudioFormat, jboolean isWriteBlocking) {
    sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
    //...
    jint written = writeToTrack(lpTrack, javaAudioFormat, bytes.get(), byteOffset,
            sizeInBytes, isWriteBlocking == JNI_TRUE /* blocking */);
    return written;
}

static jint android_media_AudioTrack_write_short(JNIEnv *env,  jobject thiz,
                                                  jshortArray javaAudioData,
                                                  jint offsetInShorts, jint sizeInShorts,
                                                  jint javaAudioFormat) {
    sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
    //...
    jint written = writeToTrack(lpTrack, javaAudioFormat, (jbyte *)cAudioData,
                                offsetInShorts * sizeof(short), sizeInShorts * sizeof(short),
            true /*blocking write, legacy behavior*/);
    //...
    return written;
}

static jint android_media_AudioTrack_write_float(JNIEnv *env,  jobject thiz,
                                                  jfloatArray javaAudioData,
                                                  jint offsetInFloats, jint sizeInFloats,
                                                  jint javaAudioFormat,
                                                  jboolean isWriteBlocking) {

    sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
    //...
    jint written = writeToTrack(lpTrack, javaAudioFormat, (jbyte *)cAudioData,
                                offsetInFloats * sizeof(float), sizeInFloats * sizeof(float),
                                isWriteBlocking == JNI_TRUE /* blocking */);
    //...
    return written;
}

All of these native functions eventually call writeToTrack, which is implemented as follows:

jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, const jbyte* data,
                  jint offsetInBytes, jint sizeInBytes, bool blocking = true) {
    ssize_t written = 0;
    // the PlaybackThread provides the shared memory: call the C++ track's write()
    if (track->sharedBuffer() == 0) {
        written = track->write(data + offsetInBytes, sizeInBytes, blocking);
        if (written == (ssize_t) WOULD_BLOCK) {
            written = 0;
        }
    } else {// the app side provides the shared memory: just memcpy directly
        const audio_format_t format = audioFormatToNative(audioFormat);
        switch (format) {

        default:
        case AUDIO_FORMAT_PCM_FLOAT:
        case AUDIO_FORMAT_PCM_16_BIT: {
            if ((size_t)sizeInBytes > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size();
            }
            // copy data into the shared memory
            memcpy(track->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
            written = sizeInBytes;
            } break;

        case AUDIO_FORMAT_PCM_8_BIT: {
            // same as above, but 8-bit data needs an intermediate conversion step
            if (((size_t)sizeInBytes)*2 > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size() / 2;
            }
            int count = sizeInBytes;
            int16_t *dst = (int16_t *)track->sharedBuffer()->pointer();
            const uint8_t *src = (const uint8_t *)(data + offsetInBytes);
            memcpy_to_i16_from_u8(dst, src, count);
            written = sizeInBytes;
            } break;
        }
    }
    return written;
}

The branch taken depends on who provides the shared memory:

  1. If track->sharedBuffer() == 0, the shared memory is provided by the PlaybackThread (MODE_STREAM), and the write method of the C++-layer track is called.
  2. If track->sharedBuffer() != 0, the shared memory is provided by the app side (MODE_STATIC), and the data is copied directly into track->sharedBuffer() with memcpy.

The track write method is covered in detail in Chapter 7.


Origin blog.csdn.net/vviccc/article/details/105286614