Android Framework audio subsystem (07) AudioTrack data transfer

Master link for this series: Thematic sub-directory of the Android Framework Audio Subsystem


Summary of this chapter's key points:

This chapter focuses on the data-transfer sub-branch of the AudioTrack process analysis, in the upper left of the mind map above. It analyzes the two modes of AudioTrack and how shared memory is established between the APP's AudioTrack and the corresponding Track in the playbackThread's mTracks.


1 Create shared memory on AudioTrack

1.1 Two modes of AudioTrack

When an APP creates an AudioTrack, a corresponding Track is created by the PlaybackThread in AudioFlinger. There are two modes for the APP to provide audio data to the AudioTrack: provide it all at once (MODE_STATIC), or provide it while playing (MODE_STREAM). Both use shared memory, but they differ in two ways:

@ 1 Shared memory in the two modes

  1. MODE_STATIC mode: data is provided once, in advance. The APP creates the shared memory and fills in all the data at once. The data is complete before the playbackThread consumes it, so no data synchronization is needed. The corresponding playbackThread work is: obtainBuffer to get a buffer containing data (the APP may have put a large amount of data into shared memory at once, so the playbackThread needs to play it over multiple passes), then releaseBuffer when done.
  2. MODE_STREAM mode: data is provided while playing. The playbackThread creates the shared memory. The APP uses obtainBuffer to get a buffer, fills it with data, then releases it with releaseBuffer. The playbackThread also consumes it over multiple passes, but here a ring buffer mechanism is used to transfer data continuously; each buffer is released after it has been consumed.

1.2 AudioTrack constructor review

Next we analyze the code for the two modes separately, starting from the creation of the AudioTrack object in the Java layer, which in turn leads to the creation of the AudioTrack object in the Native layer. Based on the analysis in the previous section, we review the call stack:

Java::AudioTrack -> Java::native_setup -> JNI transition -> android_media_AudioTrack_setup

We continue to analyze from the implementation of android_media_AudioTrack_setup, the code is as follows:

static jint
android_media_AudioTrack_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jobject jaa,
        jint sampleRateInHertz, jint javaChannelMask,
        jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession) {
	//...
    //Key point 1: create the native AudioTrack object
    sp<AudioTrack> lpTrack = new AudioTrack();
	//...
    switch (memoryMode) {//from here, different parameters are set for the two modes MODE_STREAM and MODE_STATIC
    case MODE_STREAM:
        //Key point 2.1: set() method, configure the parameters
        //Note: the APP does not allocate memory here; it is allocated later in the playbackthread,
        //so the shared memory value is "null"
        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                0,// shared mem,
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SYNC,
                NULL,                         // default offloadInfo
                -1, -1,                       // default uid, pid values
                paa);
        break;
    case MODE_STATIC:
        //the APP side allocates the shared memory
        if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
            ALOGE("Error creating AudioTrack in static mode: error creating mem heap base");
            goto native_init_failure;
        }
        //Key point 2.2: set() method, configure the parameters
        //here the shared memory value is the start address of the shared memory allocated by the APP side
        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                lpJniStorage->mMemBase,// shared mem
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SHARED,
                NULL,                         // default offloadInfo
                -1, -1,                       // default uid, pid values
                paa);
        break;
	//...
    default:
        ALOGE("Unknown mode %d", memoryMode);
        goto native_init_failure;
    }
	//...
    return (jint) AUDIOTRACK_SUCCESS;

native_init_failure:
	//...
    return (jint) AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
}

In summary, the mode is chosen in the Java layer; when entering the C++ layer:

  1. MODE_STATIC: first allocate the shared memory (the allocSharedMem method), then call the set method.
  2. MODE_STREAM: call the set method directly; the playbackthread will allocate the memory later.

1.3 Analysis of shared memory operation in AudioTrack

Based on the analysis in the previous section, we review the call stack:

AudioTrack::set->AudioTrack::createTrack_l

In the createTrack_l function, the operations related to shared memory are:

status_t AudioTrack::createTrack_l()
{
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    //...
    // Starting address of buffers in shared memory.  If there is a shared buffer, buffers
    // is the value of pointer() for the shared buffer, otherwise buffers points
    // immediately after the control block.  This address is for the mapping within client
    // address space.  AudioFlinger::TrackBase::mBuffer is for the server address space.
    void* buffers;
    if (mSharedBuffer == 0) {
        //points to the Buffer provided by the playbackthread
        buffers = (char*)cblk + sizeof(audio_track_cblk_t);
    } else {
        //points to the Buffer provided by the APP
        buffers = mSharedBuffer->pointer();
    }
    //...
    /* update proxy: the APP's AudioTrack establishes shared memory with the Track in the
     * thread; the AudioTrackClientProxy and StaticAudioTrackClientProxy here manage the Buffer
     */
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
    //...
    return NO_ERROR;
}

According to the analysis in the previous section, creating an AudioTrack on the APP side always leads to the creation of a Track in AudioFlinger::PlaybackThread. So next we focus on the Track object of the PlaybackThread.


2 PlaybackThread Track establishes shared memory

Track inherits from TrackBase, and TrackBase contains important shared-memory management logic that we need to analyze. Here we mainly focus on sharedBuffer.

// TrackBase constructor must be called with AudioFlinger::mLock held
AudioFlinger::ThreadBase::TrackBase::TrackBase(
            ThreadBase *thread,
            const sp<Client>& client,
            //...
            alloc_type alloc,
            track_type type)
    :   RefBase(),
        mThread(thread),
        mClient(client),
        mCblk(NULL),
        //...
{
    // if the caller is us, trust the specified uid
    if (IPCThreadState::self()->getCallingPid() != getpid_cached || clientUid == -1) {
        int newclientUid = IPCThreadState::self()->getCallingUid();
        if (clientUid != -1 && clientUid != newclientUid) {
            ALOGW("uid %d tried to pass itself off as %d", newclientUid, clientUid);
        }
        clientUid = newclientUid;
    }
	
    mUid = clientUid;
    size_t size = sizeof(audio_track_cblk_t);//header structure
    size_t bufferSize = (buffer == NULL ? roundup(frameCount) : frameCount) * mFrameSize;
    if (buffer == NULL && alloc == ALLOC_CBLK) {
        size += bufferSize;
    }

    if (client != 0) {
        //if the APP side provides the Buffer, only the CBLK is allocated here; otherwise CBLK+Buffer
        mCblkMemory = client->heap()->allocate(size);
        if (mCblkMemory == 0 ||
                (mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer())) == NULL) {
            client->heap()->dump("AudioTrack");
            mCblkMemory.clear();
            return;
        }
    } else {
        // this syntax avoids calling the audio_track_cblk_t constructor twice
        mCblk = (audio_track_cblk_t *) new uint8_t[size];
        // assume mCblk != NULL
    }

    // construct the shared structure in-place.
    if (mCblk != NULL) {
        new(mCblk) audio_track_cblk_t();
        switch (alloc) {
        case ALLOC_READONLY: {
            const sp<MemoryDealer> roHeap(thread->readOnlyHeap());
            if (roHeap == 0 ||
                    (mBufferMemory = roHeap->allocate(bufferSize)) == 0 ||
                    (mBuffer = mBufferMemory->pointer()) == NULL) {
                ALOGE("not enough memory for read-only buffer size=%zu", bufferSize);
                if (roHeap != 0) {
                    roHeap->dump("buffer");
                }
                mCblkMemory.clear();
                mBufferMemory.clear();
                return;
            }
            memset(mBuffer, 0, bufferSize);
            } break;
        case ALLOC_PIPE:
            mBufferMemory = thread->pipeMemory();
            mBuffer = NULL;
            break;
        case ALLOC_CBLK:
            // Buffer initialization
            if (buffer == NULL) {//points to the Buffer provided by the playbackthread
                mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
                memset(mBuffer, 0, bufferSize);
            } else {
                mBuffer = buffer;//points to the Buffer provided by the APP
            }
            }
            break;
        case ALLOC_LOCAL:
            mBuffer = calloc(1, bufferSize);
            break;
        case ALLOC_NONE:
            mBuffer = buffer;
            break;
        }
    }
}
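The sizing logic above (header plus a buffer whose frameCount is rounded up) can be sketched as follows. This is a simplified model, not the real TrackBase code: `cblk_header` and `track_alloc_size` are invented names, and `roundup` mirrors the audio_utils helper that rounds up to the next power of two.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical stand-in for the audio_track_cblk_t control-block header.
struct cblk_header { uint8_t opaque[64]; };

// Mirrors audio_utils' roundup(): round v up to the next power of two.
static size_t roundup(size_t v) {
    size_t p = 1;
    while (p < v) p <<= 1;
    return p;
}

// Sketch of the TrackBase sizing logic: the allocation always holds the
// control-block header; the data buffer is appended only when the APP did
// not supply its own shared buffer (the ALLOC_CBLK, buffer == NULL case).
static size_t track_alloc_size(bool appProvidedBuffer, size_t frameCount, size_t frameSize) {
    size_t size = sizeof(cblk_header);                       // header
    size_t bufferSize = (appProvidedBuffer ? frameCount      // APP buffer: used as-is
                                           : roundup(frameCount)) * frameSize;
    if (!appProvidedBuffer) {
        size += bufferSize;                                  // CBLK + buffer in one block
    }
    return size;
}
```

Rounding frameCount up to a power of two is what later allows the ring-buffer index computations to use masking instead of division.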

The above mainly covers the creation and allocation of the Buffer. Next, analyze the implementation of Track; the code is as follows:

AudioFlinger::PlaybackThread::Track::Track(
            PlaybackThread *thread,
            const sp<Client>& client,
            //...
            const sp<IMemory>& sharedBuffer,
            //...
            track_type type)
    :   TrackBase(thread, client, sampleRate, format, channelMask, frameCount,
                  (sharedBuffer != 0) ? sharedBuffer->pointer() : buffer,
                  sessionId, uid, flags, true /*isOut*/,
                  (type == TYPE_PATCH) ? ( buffer == NULL ? ALLOC_LOCAL : ALLOC_NONE) : ALLOC_CBLK,
                  type),
    mFillingUpStatus(FS_INVALID),
    // mRetryCount initialized later when needed
    mSharedBuffer(sharedBuffer),
    mStreamType(streamType),
    //...
{
    //shared-memory related code: the AudioTrackServerProxy and StaticAudioTrackServerProxy here manage the Buffer
    if (sharedBuffer == 0) {
        mAudioTrackServerProxy = new AudioTrackServerProxy(mCblk, mBuffer, frameCount,
                mFrameSize, !isExternalTrack(), sampleRate);
    } else {
        mAudioTrackServerProxy = new StaticAudioTrackServerProxy(mCblk, mBuffer, frameCount,
                mFrameSize);
    }
    mServerProxy = mAudioTrackServerProxy;
    mName = thread->getTrackName_l(channelMask, format, sessionId);
    //...
    // only allocate a fast track index if we were able to allocate a normal track name
    if (flags & IAudioFlinger::TRACK_FAST) {
        mAudioTrackServerProxy->framesReadyIsCalledByMultipleThreads();
        int i = __builtin_ctz(thread->mFastTrackAvailMask);
        mFastIndex = i;
        // Read the initial underruns because this field is never cleared by the fast mixer
        mObservedUnderruns = thread->getFastTrackUnderruns(i);
        thread->mFastTrackAvailMask &= ~(1 << i);
    }
}

To sum up:

  1. AudioTrack uses an AudioTrackClientProxy or StaticAudioTrackClientProxy object to manage the shared memory.
  2. Track uses an AudioTrackServerProxy or StaticAudioTrackServerProxy object to manage the shared memory.

3 Transmission of audio data

Audio data is transmitted through AudioTrack's write method. We continue analyzing the track's write method based on the analysis stack in Chapter 5. The call stack so far is as follows:

Java layer AudioTrack.write -> native_write_XXX -> writeToTrack
-> C++ layer track->sharedBuffer() or C++ layer track->write

The code of writeToTrack here is implemented as follows:

jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, const jbyte* data,
                  jint offsetInBytes, jint sizeInBytes, bool blocking = true) {
    ssize_t written = 0;
    //the playbackthread provides the shared memory: call the C++ layer track's write function
    if (track->sharedBuffer() == 0) {
        written = track->write(data + offsetInBytes, sizeInBytes, blocking);
        if (written == (ssize_t) WOULD_BLOCK) {
            written = 0;
        }
    } else {//the APP side provides the shared memory: directly do a memcpy
        const audio_format_t format = audioFormatToNative(audioFormat);
        switch (format) {

        default:
        case AUDIO_FORMAT_PCM_FLOAT:
        case AUDIO_FORMAT_PCM_16_BIT: {
            if ((size_t)sizeInBytes > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size();
            }
            //copy the data into the shared memory here
            memcpy(track->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
            written = sizeInBytes;
            } break;

        case AUDIO_FORMAT_PCM_8_BIT: {
            //same as above, except 8-bit data needs an intermediate conversion step
            if (((size_t)sizeInBytes)*2 > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size() / 2;
            }
            int count = sizeInBytes;
            int16_t *dst = (int16_t *)track->sharedBuffer()->pointer();
            const uint8_t *src = (const uint8_t *)(data + offsetInBytes);
            memcpy_to_i16_from_u8(dst, src, count);
            written = sizeInBytes;
            } break;
        }
    }
    return written;
}

To review:

  1. If track->sharedBuffer() == 0, i.e. the shared memory is provided by the playbackthread, the C++ layer track's write method is executed.
  2. If track->sharedBuffer() != 0, i.e. the shared memory is provided by the APP side, a memcpy directly copies the data into track->sharedBuffer().
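The 8-bit branch above relies on memcpy_to_i16_from_u8 from audio_utils. Below is a minimal re-implementation sketch of the conversion it performs (assuming the standard unsigned-8-bit-PCM to signed-16-bit mapping; the function name `to_i16_from_u8` is ours):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Sketch of the conversion done by audio_utils' memcpy_to_i16_from_u8().
// Unsigned 8-bit PCM is centered at 0x80, so each sample is re-centered
// around zero and shifted into the high byte of a signed 16-bit sample.
static void to_i16_from_u8(int16_t *dst, const uint8_t *src, size_t count) {
    for (size_t i = 0; i < count; i++) {
        dst[i] = (int16_t)((src[i] - 0x80) << 8);
    }
}
```

This also explains why the AUDIO_FORMAT_PCM_8_BIT branch compares sizeInBytes*2 against the shared buffer size: every input byte becomes two output bytes.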

3.1 Data transfer process in MODE_STREAM mode

@ 1 Client proxy process

Continue here with the AudioTrack write method; the code is as follows:

ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
    //...
    size_t written = 0;
    Buffer audioBuffer;

    while (userSize >= mFrameSize) {
        audioBuffer.frameCount = userSize / mFrameSize;
        //Key point 1: obtain a shared-memory Buffer
        status_t err = obtainBuffer(&audioBuffer,
                blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
        //...
        size_t toWrite;
        //copy data from buffer into audioBuffer
        if (mFormat == AUDIO_FORMAT_PCM_8_BIT && !(mFlags & AUDIO_OUTPUT_FLAG_DIRECT)) {
            toWrite = audioBuffer.size >> 1;
            memcpy_to_i16_from_u8(audioBuffer.i16, (const uint8_t *) buffer, toWrite);
        } else {
            toWrite = audioBuffer.size;
            memcpy(audioBuffer.i8, buffer, toWrite);
        }
        //compute the remaining data
        buffer = ((const char *) buffer) + toWrite;
        userSize -= toWrite;
        written += toWrite;
        //Key point 2: release the Buffer
        releaseBuffer(&audioBuffer);
    }
    //the shared memory has been released
    return written;
}

Continue to analyze the implementation of obtainBuffer; the code is as follows:

status_t AudioTrack::obtainBuffer(Buffer* audioBuffer, int32_t waitCount)
{
    ...//parameter conversion and calculation
    return obtainBuffer(audioBuffer, requested);
}

Continue to analyze the implementation of obtainBuffer after the parameter conversion; the code is as follows:

status_t AudioTrack::obtainBuffer(Buffer* audioBuffer, const struct timespec *requested,
        struct timespec *elapsed, size_t *nonContig)
{
    ...//parameter conversion
    status = proxy->obtainBuffer(&buffer, requested, elapsed);
    ...//fill in the result
}

For MODE_STREAM mode, because mSharedBuffer == 0, the proxy here is an AudioTrackClientProxy:

status_t AudioTrack::createTrack_l(){
   //...
    void* buffers;
    if (mSharedBuffer == 0) {
        buffers = (char*)cblk + sizeof(audio_track_cblk_t);
    } else {
        buffers = mSharedBuffer->pointer();
    }
	//...
    // update proxy
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
    //...
}

AudioTrackClientProxy does not define its own obtainBuffer method; what is actually called is the obtainBuffer method of its parent class ClientProxy. That is, a blank Buffer is obtained through ClientProxy, the audio data is written into that Buffer, and finally releaseBuffer is called.

@ 2 Server process

Here the Track's getNextBuffer (buffer-obtaining) method is used as the entry point for analysis:

// AudioBufferProvider interface
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
        AudioBufferProvider::Buffer* buffer, int64_t pts __unused)
{
    ServerProxy::Buffer buf;
    size_t desiredFrames = buffer->frameCount;
    buf.mFrameCount = desiredFrames;
    //this calls mServerProxy's obtainBuffer method
    status_t status = mServerProxy->obtainBuffer(&buf);
    buffer->frameCount = buf.mFrameCount;
    buffer->raw = buf.mRaw;
    if (buf.mFrameCount == 0) {
        mAudioTrackServerProxy->tallyUnderrunFrames(desiredFrames);
    }
    return status;
}

Continue here with the obtainBuffer method of mServerProxy (in MODE_STREAM mode, mServerProxy is an AudioTrackServerProxy). AudioTrackServerProxy does not define its own obtainBuffer method; what is actually called is the obtainBuffer method of its parent class ServerProxy, which obtains a Buffer filled with data. Note: releaseBuffer is ultimately invoked through TrackBase's destructor, whose code is as follows:

AudioFlinger::ThreadBase::TrackBase::~TrackBase()
{
    // delete the proxy before deleting the shared memory it refers to, to avoid dangling reference
    delete mServerProxy;
    if (mCblk != NULL) {
        if (mClient == 0) {
            delete mCblk;
        } else {
            mCblk->~audio_track_cblk_t();   // destroy our shared-structure.
        }
    }
    mCblkMemory.clear();    // free the shared memory before releasing the heap it belongs to
    if (mClient != 0) {
        // Client destructor must run with AudioFlinger client mutex locked
        Mutex::Autolock _l(mClient->audioFlinger()->mClientLock);
        // If the client's reference count drops to zero, the associated destructor
        // must run with AudioFlinger lock held. Thus the explicit clear() rather than
        // relying on the automatic clear() at end of scope.
        mClient.clear();
    }
    // flush the binder command buffer
    IPCThreadState::self()->flushCommands();
}

In other words, there is no need to explicitly call releaseBuffer to release the shared memory.

@ 3 Data synchronization

MODE_STREAM mode uses a ring buffer to synchronize data: one side produces data while the other consumes it, and a ring buffer is the most reliable structure for this. In practice this is the cooperation between ClientProxy::obtainBuffer, ClientProxy::releaseBuffer, ServerProxy::obtainBuffer, and ServerProxy::releaseBuffer. Below is a brief description of the ring buffer logic to explain the principle. The audio stream data is divided into two parts, the data header and the data itself, as follows:

For the ring buffer there are several key variables: mFront (read pointer R), mRear (write pointer W), mFrameCount (buffer length LEN), and mFrameCountP2 (LEN rounded up to the next power of 2). Next, the ring buffer logic is explained with examples and pseudocode.

The logical processing of the ring buffer is as follows:

Ring buffer: initially R = 0, W = 0; the buffer length is LEN
Write one element: w = W % LEN; buf[w] = data; W++;
Read one element: r = R % LEN; data = buf[r]; R++;
The buffer is empty when: R == W
The buffer is full when: W - R == LEN

Note: mathematically, when LEN is a power of 2, the following operations are equivalent:

w = W % LEN is equivalent to w = W & (LEN-1)
r = R % LEN is equivalent to r = R & (LEN-1)
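The pseudocode above can be turned into a small, self-contained sketch (not the real AudioTrackShared.cpp code). As in the real implementation, R and W grow monotonically and only the index into the array is wrapped; because the length is a power of two, the masking form is used:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal sketch of the ring-buffer bookkeeping described above. R and W
// grow monotonically (like mFront/mRear); only the index into buf wraps.
// len must be a power of two so "% LEN" can be replaced by "& (LEN - 1)".
struct RingBuffer {
    uint32_t R = 0;              // read position (mFront)
    uint32_t W = 0;              // write position (mRear)
    std::vector<int16_t> buf;

    explicit RingBuffer(uint32_t len) : buf(len) {}

    bool empty() const { return R == W; }
    bool full()  const { return W - R == buf.size(); }

    bool write(int16_t data) {             // produce one element
        if (full()) return false;
        buf[W & (buf.size() - 1)] = data;  // w = W % LEN
        W++;
        return true;
    }
    bool read(int16_t *data) {             // consume one element
        if (empty()) return false;
        *data = buf[R & (buf.size() - 1)]; // r = R % LEN
        R++;
        return true;
    }
};
```

Because R and W are never wrapped, empty (R == W) and full (W - R == LEN) are unambiguous without wasting a slot, and unsigned overflow of the positions is harmless as long as LEN is a power of two.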

3.2 Data transfer process in MODE_STATIC mode

@ 1 Client process

The previous native-layer writeToTrack method writes data directly to track->sharedBuffer()->pointer(). For MODE_STATIC mode, because mSharedBuffer != 0, the proxy here is a StaticAudioTrackClientProxy:

status_t AudioTrack::createTrack_l(){
   //...
    void* buffers;
    if (mSharedBuffer == 0) {
        buffers = (char*)cblk + sizeof(audio_track_cblk_t);
    } else {
        buffers = mSharedBuffer->pointer();
    }
	//...
    // update proxy
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
    //...
}

StaticAudioTrackClientProxy does not define its own obtainBuffer method either; what is actually called is the obtainBuffer method of its parent class ClientProxy: a blank Buffer is obtained through ClientProxy, the audio data is written into it, and finally releaseBuffer is called.

@ 2 Server process

Here we again use the Track getNextBuffer method above as the entry point for analysis:

// AudioBufferProvider interface
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
        AudioBufferProvider::Buffer* buffer, int64_t pts __unused)
{
    ServerProxy::Buffer buf;
    size_t desiredFrames = buffer->frameCount;
    buf.mFrameCount = desiredFrames;
    //this calls mServerProxy's obtainBuffer method
    status_t status = mServerProxy->obtainBuffer(&buf);
    buffer->frameCount = buf.mFrameCount;
    buffer->raw = buf.mRaw;
    if (buf.mFrameCount == 0) {
        mAudioTrackServerProxy->tallyUnderrunFrames(desiredFrames);
    }
    return status;
}

Continue here with the obtainBuffer method of mServerProxy (in MODE_STATIC mode, mServerProxy is a StaticAudioTrackServerProxy). StaticAudioTrackServerProxy overrides ServerProxy's obtainBuffer method. The releaseBuffer handling is the same as above: it is invoked through TrackBase's destructor, so there is no need to release the Buffer with an explicit call.

@ 3 Data synchronization

There is no data synchronization problem in MODE_STATIC mode.

3.3 Summary of data transfer

@ 1 For different MODEs, these proxies point to different objects:

  1. AudioTrack contains mProxy, which is used to manage the shared memory and provides the obtainBuffer and releaseBuffer functions.
  2. Track contains mServerProxy, which is used to manage the shared memory and provides the obtainBuffer and releaseBuffer functions.

@ 2 AudioTrack and AudioFlinger use mCblkMemory to implement "producer-consumer" data interaction. Let's analyze how ServerProxy and ClientProxy exchange data through shared memory:

  1. When a Track is created, AudioFlinger allocates audio shared memory for it. AudioTrack and AudioFlinger then take this buffer as a parameter to create mClientProxy and mServerProxy through AudioTrackClientProxy and AudioTrackServerProxy.
  2. AudioTrack (the APP application side) writes data to the shared buffer through mClientProxy, and AudioFlinger (the server side) reads data from the shared memory through mServerProxy. In this way, through the proxies, the client and server form a producer-consumer model over the shared memory.

@ 3 AudioTrackClientProxy and AudioTrackServerProxy (both classes are located in AudioTrackShared.cpp) encapsulate the client-side and server-side use of obtainBuffer and releaseBuffer on the shared buffer. These interfaces work as follows:

@@ 3.1 Client side:

  1. AudioTrackClientProxy::obtainBuffer() gets a contiguous empty buffer from the audio buffer;
  2. AudioTrackClientProxy::releaseBuffer() puts the buffer, now filled with data, back into the audio buffer.

@@ 3.2 Server side:

  1. AudioTrackServerProxy::obtainBuffer() obtains a contiguous data-filled buffer from the audio buffer;
  2. AudioTrackServerProxy::releaseBuffer() puts the consumed, now empty buffer back into the audio buffer.
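The four interfaces summarized above can be illustrated with a simplified, hypothetical sketch (all names and the structure here are invented for illustration; the real ClientProxy/ServerProxy additionally handle blocking waits, wrap-around regions in two parts, and error states). The client obtains contiguous empty space and releases it after writing; the server obtains contiguous filled data and releases it after consuming:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

constexpr uint32_t LEN = 8;  // shared buffer length in frames, a power of two

// Hypothetical shared state: one array plus never-wrapping read/write positions.
struct Shared {
    int16_t buf[LEN] = {};
    uint32_t R = 0;  // server-side read position (frames consumed)
    uint32_t W = 0;  // client-side write position (frames produced)
};

// A contiguous region of the shared buffer handed out by obtainBuffer.
struct Region { int16_t *data; uint32_t frames; };

// Client side: obtain contiguous empty space starting at W.
Region clientObtain(Shared &s) {
    uint32_t avail = LEN - (s.W - s.R);            // empty frames overall
    uint32_t w = s.W & (LEN - 1);
    uint32_t frames = std::min(avail, LEN - w);    // stop at the array end
    return { &s.buf[w], frames };
}
void clientRelease(Shared &s, uint32_t frames) { s.W += frames; }

// Server side: obtain contiguous filled data starting at R.
Region serverObtain(Shared &s) {
    uint32_t avail = s.W - s.R;                    // filled frames overall
    uint32_t r = s.R & (LEN - 1);
    uint32_t frames = std::min(avail, LEN - r);
    return { &s.buf[r], frames };
}
void serverRelease(Shared &s, uint32_t frames) { s.R += frames; }
```

The key design point this sketch preserves is that obtainBuffer only ever hands out a contiguous span, so a region that would wrap past the array end is clipped and the remainder is obtained on the next call.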


Origin blog.csdn.net/vviccc/article/details/105312220