Android Framework Audio Subsystem (07): AudioTrack Data Transfer

Link to the master outline for this series: Topic Index, Android Framework Audio Subsystem


Key points and overview for this chapter:

This chapter focuses on the data-transfer sub-branch of the AudioTrack flow analysis (upper-left part of the mind map above). It analyzes the two AudioTrack modes, and how the shared memory is established between the app's AudioTrack and the corresponding track in the playbackThread's mTracks.


1 Establishing Shared Memory on the AudioTrack Side

1.1 The Two AudioTrack Modes

When an app creates an AudioTrack, a corresponding Track is created in AudioFlinger's PlaybackThread. The app can supply audio data to the AudioTrack in two ways: all at once (MODE_STATIC) or incrementally while playing (MODE_STREAM). Both rely on shared memory, but they differ as follows:

@1 The two shared-memory modes

  1. MODE_STATIC: all data is provided up front. The app creates the shared memory and fills it once. Once the data is in place, the playbackThread can consume it directly, so there is no synchronization problem. The playbackThread's job is to call obtainBuffer to get a buffer that already contains data (the data the app submits at once may be large, so the playbackThread may need several playback passes), then release the buffer when done.
  2. MODE_STREAM: data is provided while playing. The playbackThread creates the shared memory. The app calls obtainBuffer to get a buffer, fills it with data, then calls releaseBuffer. The playbackThread likewise plays in multiple passes, except that here a ring-buffer mechanism passes data continuously, with each buffer released when done.

1.2 Review of the AudioTrack Constructor

Based on the above, we now analyze the code for the two modes separately, starting from the creation of the Java-layer AudioTrack object, which in turn triggers the creation of the native AudioTrack object. From the previous chapter's analysis, recall the call stack:

Java::AudioTrack -> Java::native_setup -> JNI dispatch -> android_media_AudioTrack_setup

We continue from the implementation of android_media_AudioTrack_setup:

static jint
android_media_AudioTrack_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jobject jaa,
        jint sampleRateInHertz, jint javaChannelMask,
        jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession) {
	//...
    //Key point 1: create the native AudioTrack object
    sp<AudioTrack> lpTrack = new AudioTrack();
	//...
    switch (memoryMode) {//configure different parameters for MODE_STREAM vs MODE_STATIC
    case MODE_STREAM:
        //Key point 2.1: set(), configure parameters
        //Note: the app does not allocate memory here; the playbackthread allocates it later,
        //so the shared memory argument is null
        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                0,// shared mem,
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SYNC,
                NULL,                         // default offloadInfo
                -1, -1,                       // default uid, pid values
                paa);
        break;
    case MODE_STATIC:
        //the app side allocates the shared memory
        if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
            ALOGE("Error creating AudioTrack in static mode: error creating mem heap base");
            goto native_init_failure;
        }
        //Key point 2.2: set(), configure parameters
        //the shared memory argument is the base address of the app-allocated shared memory
        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user));
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                lpJniStorage->mMemBase,// shared mem
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SHARED,
                NULL,                         // default offloadInfo
                -1, -1,                       // default uid, pid values
                paa);
        break;
	//...
    default:
        ALOGE("Unknown mode %d", memoryMode);
        goto native_init_failure;
    }
	//...
    return (jint) AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
}

To summarize: the mode is chosen at the Java layer, and on entering the C++ layer:

  1. MODE_STATIC: memory is allocated first (allocSharedMem), then set() is called.
  2. MODE_STREAM: set() is called directly; the memory is allocated later by the playbackthread.

1.3 Analysis of Shared-Memory Operations in AudioTrack

From the previous chapter's analysis, recall the call stack:

AudioTrack::set->AudioTrack::createTrack_l

In the createTrack_l function, the shared-memory-related operations are:

status_t AudioTrack::createTrack_l()
{
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    //...
    // Starting address of buffers in shared memory.  If there is a shared buffer, buffers
    // is the value of pointer() for the shared buffer, otherwise buffers points
    // immediately after the control block.  This address is for the mapping within client
    // address space.  AudioFlinger::TrackBase::mBuffer is for the server address space.
    void* buffers;
    if (mSharedBuffer == 0) {
        //points to the Buffer provided by the playbackthread
        buffers = (char*)cblk + sizeof(audio_track_cblk_t);
    } else {
        //points to the Buffer provided by the app
        buffers = mSharedBuffer->pointer();
    }
    //...
    /* update proxy: this is where the app's AudioTrack and the Thread's Track share memory;
     * AudioTrackClientProxy and StaticAudioTrackClientProxy manage the Buffer
     */
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
    //...
    return NO_ERROR;
}

From the previous chapter we know that creating an AudioTrack on the app side necessarily means creating a Track in AudioFlinger::PlaybackThread. We therefore focus next on the PlaybackThread's Track object.


2 Establishing Shared Memory for the PlaybackThread's Track

Track inherits from TrackBase, and TrackBase contains the important shared-memory management logic we need to analyze; the key parameter is sharedBuffer. The code is as follows:

// TrackBase constructor must be called with AudioFlinger::mLock held
AudioFlinger::ThreadBase::TrackBase::TrackBase(
            ThreadBase *thread,
            const sp<Client>& client,
            //...
            alloc_type alloc,
            track_type type)
    :   RefBase(),
        mThread(thread),
        mClient(client),
        mCblk(NULL),
        //...
{
    // if the caller is us, trust the specified uid
    if (IPCThreadState::self()->getCallingPid() != getpid_cached || clientUid == -1) {
        int newclientUid = IPCThreadState::self()->getCallingUid();
        if (clientUid != -1 && clientUid != newclientUid) {
            ALOGW("uid %d tried to pass itself off as %d", newclientUid, clientUid);
        }
        clientUid = newclientUid;
    }
	
    mUid = clientUid;
    size_t size = sizeof(audio_track_cblk_t);//the header struct
    size_t bufferSize = (buffer == NULL ? roundup(frameCount) : frameCount) * mFrameSize;
    if (buffer == NULL && alloc == ALLOC_CBLK) {
        size += bufferSize;
    }

    if (client != 0) {
        //if the app provides the Buffer, only the CBLK is allocated here; otherwise CBLK + Buffer
        mCblkMemory = client->heap()->allocate(size);
        if (mCblkMemory == 0 ||
                (mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer())) == NULL) {
            client->heap()->dump("AudioTrack");
            mCblkMemory.clear();
            return;
        }
    } else {
        // this syntax avoids calling the audio_track_cblk_t constructor twice
        mCblk = (audio_track_cblk_t *) new uint8_t[size];
        // assume mCblk != NULL
    }

    // construct the shared structure in-place.
    if (mCblk != NULL) {
        new(mCblk) audio_track_cblk_t();
        switch (alloc) {
        case ALLOC_READONLY: {
            const sp<MemoryDealer> roHeap(thread->readOnlyHeap());
            if (roHeap == 0 ||
                    (mBufferMemory = roHeap->allocate(bufferSize)) == 0 ||
                    (mBuffer = mBufferMemory->pointer()) == NULL) {
                ALOGE("not enough memory for read-only buffer size=%zu", bufferSize);
                if (roHeap != 0) {
                    roHeap->dump("buffer");
                }
                mCblkMemory.clear();
                mBufferMemory.clear();
                return;
            }
            memset(mBuffer, 0, bufferSize);
            } break;
        case ALLOC_PIPE:
            mBufferMemory = thread->pipeMemory();
            mBuffer = NULL;
            break;
        case ALLOC_CBLK:
            // initialize the buffer
            if (buffer == NULL) {//points to the Buffer provided by the playbackthread
                mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
                memset(mBuffer, 0, bufferSize);
            } else {
                mBuffer = buffer;//points to the Buffer provided by the app
            }
            }
            break;
        case ALLOC_LOCAL:
            mBuffer = calloc(1, bufferSize);
            break;
        case ALLOC_NONE:
            mBuffer = buffer;
            break;
        }
    }
}

The code above covers buffer creation and allocation. Next, the Track implementation:

AudioFlinger::PlaybackThread::Track::Track(
            PlaybackThread *thread,
            const sp<Client>& client,
            //...
            const sp<IMemory>& sharedBuffer,
            //...
            track_type type)
    :   TrackBase(thread, client, sampleRate, format, channelMask, frameCount,
                  (sharedBuffer != 0) ? sharedBuffer->pointer() : buffer,
                  sessionId, uid, flags, true /*isOut*/,
                  (type == TYPE_PATCH) ? ( buffer == NULL ? ALLOC_LOCAL : ALLOC_NONE) : ALLOC_CBLK,
                  type),
    mFillingUpStatus(FS_INVALID),
    // mRetryCount initialized later when needed
    mSharedBuffer(sharedBuffer),
    mStreamType(streamType),
    //...
{
    //shared-memory code: AudioTrackServerProxy and StaticAudioTrackServerProxy manage the Buffer
    if (sharedBuffer == 0) {
        mAudioTrackServerProxy = new AudioTrackServerProxy(mCblk, mBuffer, frameCount,
                mFrameSize, !isExternalTrack(), sampleRate);
    } else {
        mAudioTrackServerProxy = new StaticAudioTrackServerProxy(mCblk, mBuffer, frameCount,
                mFrameSize);
    }
    mServerProxy = mAudioTrackServerProxy;
    mName = thread->getTrackName_l(channelMask, format, sessionId);
    //...
    // only allocate a fast track index if we were able to allocate a normal track name
    if (flags & IAudioFlinger::TRACK_FAST) {
        mAudioTrackServerProxy->framesReadyIsCalledByMultipleThreads();
        int i = __builtin_ctz(thread->mFastTrackAvailMask);
        mFastIndex = i;
        // Read the initial underruns because this field is never cleared by the fast mixer
        mObservedUnderruns = thread->getFastTrackUnderruns(i);
        thread->mFastTrackAvailMask &= ~(1 << i);
    }
}

To summarize:

  1. AudioTrack uses an AudioTrackClientProxy or a StaticAudioTrackClientProxy object to manage the shared memory.
  2. Track uses an AudioTrackServerProxy or a StaticAudioTrackServerProxy object to manage the shared memory.

3 Transferring the Audio Data

Audio data is transferred through AudioTrack's write method. Continuing from the call stack analyzed in Chapter 5, we now look at the track's write method. The path so far is:

Java AudioTrack.write -> native_write_XXX -> writeToTrack
-> C++ track->sharedBuffer() or C++ track->write

writeToTrack is implemented as follows:

jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, const jbyte* data,
                  jint offsetInBytes, jint sizeInBytes, bool blocking = true) {
    ssize_t written = 0;
    //the playbackthread provides the shared memory: call the C++ track's write function
    if (track->sharedBuffer() == 0) {
        written = track->write(data + offsetInBytes, sizeInBytes, blocking);
        if (written == (ssize_t) WOULD_BLOCK) {
            written = 0;
        }
    } else {//the app provides the shared memory: just do a memcpy
        const audio_format_t format = audioFormatToNative(audioFormat);
        switch (format) {

        default:
        case AUDIO_FORMAT_PCM_FLOAT:
        case AUDIO_FORMAT_PCM_16_BIT: {
            if ((size_t)sizeInBytes > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size();
            }
            //copy data into the shared memory
            memcpy(track->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
            written = sizeInBytes;
            } break;

        case AUDIO_FORMAT_PCM_8_BIT: {
            //same as above, except 8-bit data needs an intermediate conversion step
            if (((size_t)sizeInBytes)*2 > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size() / 2;
            }
            int count = sizeInBytes;
            int16_t *dst = (int16_t *)track->sharedBuffer()->pointer();
            const uint8_t *src = (const uint8_t *)(data + offsetInBytes);
            memcpy_to_i16_from_u8(dst, src, count);
            written = sizeInBytes;
            } break;
        }
    }
    return written;
}

Recall the two cases:

  1. If track->sharedBuffer() == 0, the playbackthread provides the shared memory, and the C++ track's write method is called.
  2. If track->sharedBuffer() != 0, the app provides the shared memory, and a direct memcpy fills track->sharedBuffer().

3.1 Data Transfer in MODE_STREAM

@1 Client-side proxy flow

We continue with the track's write method:

ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
    //...
    size_t written = 0;
    Buffer audioBuffer;

    while (userSize >= mFrameSize) {
        audioBuffer.frameCount = userSize / mFrameSize;
        //Key point 1: obtain a shared-memory Buffer
        status_t err = obtainBuffer(&audioBuffer,
                blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
        //...
        size_t toWrite;
        //copy data from buffer into audioBuffer
        if (mFormat == AUDIO_FORMAT_PCM_8_BIT && !(mFlags & AUDIO_OUTPUT_FLAG_DIRECT)) {
            toWrite = audioBuffer.size >> 1;
            memcpy_to_i16_from_u8(audioBuffer.i16, (const uint8_t *) buffer, toWrite);
        } else {
            toWrite = audioBuffer.size;
            memcpy(audioBuffer.i8, buffer, toWrite);
        }
        //account for the remaining data
        buffer = ((const char *) buffer) + toWrite;
        userSize -= toWrite;
        written += toWrite;
        //Key point 2: release the Buffer
        releaseBuffer(&audioBuffer);
    }
    //the shared memory has already been returned by releaseBuffer inside the loop
    return written;
}

Next, the implementation of obtainBuffer:

status_t AudioTrack::obtainBuffer(Buffer* audioBuffer, int32_t waitCount)
{
    ...//parameter conversion and computation
    return obtainBuffer(audioBuffer, requested);
}

And the obtainBuffer overload called after parameter conversion:

status_t AudioTrack::obtainBuffer(Buffer* audioBuffer, const struct timespec *requested,
        struct timespec *elapsed, size_t *nonContig)
{
    ...//parameter conversion
    status = proxy->obtainBuffer(&buffer, requested, elapsed);
    ...//fill in the result
}

For MODE_STREAM, mSharedBuffer == 0, so the proxy here is an AudioTrackClientProxy:

status_t AudioTrack::createTrack_l(){
   //...
    void* buffers;
    if (mSharedBuffer == 0) {
        buffers = (char*)cblk + sizeof(audio_track_cblk_t);
    } else {
        buffers = mSharedBuffer->pointer();
    }
	//...
    // update proxy
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
    //...
}

AudioTrackClientProxy itself does not define obtainBuffer; what actually runs is obtainBuffer in its parent class, ClientProxy. In other words, ClientProxy obtains an empty Buffer, the audio data is written into it, and releaseBuffer is called at the end.

@2 Server-side flow

We take Track's getNextBuffer (buffer acquisition) method as the entry point:

// AudioBufferProvider interface
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
        AudioBufferProvider::Buffer* buffer, int64_t pts __unused)
{
    ServerProxy::Buffer buf;
    size_t desiredFrames = buffer->frameCount;
    buf.mFrameCount = desiredFrames;
    //call mServerProxy's obtainBuffer method
    status_t status = mServerProxy->obtainBuffer(&buf);
    buffer->frameCount = buf.mFrameCount;
    buffer->raw = buf.mRaw;
    if (buf.mFrameCount == 0) {
        mAudioTrackServerProxy->tallyUnderrunFrames(desiredFrames);
    }
    return status;
}

We continue with obtainBuffer on mServerProxy (in MODE_STREAM, mServerProxy is an AudioTrackServerProxy). AudioTrackServerProxy does not define obtainBuffer either, so obtainBuffer in its parent class, ServerProxy, runs and returns a Buffer that contains data. Note: the final releaseBuffer is invoked directly from TrackBase's destructor, whose code is:

AudioFlinger::ThreadBase::TrackBase::~TrackBase()
{
    // delete the proxy before deleting the shared memory it refers to, to avoid dangling reference
    delete mServerProxy;
    if (mCblk != NULL) {
        if (mClient == 0) {
            delete mCblk;
        } else {
            mCblk->~audio_track_cblk_t();   // destroy our shared-structure.
        }
    }
    mCblkMemory.clear();    // free the shared memory before releasing the heap it belongs to
    if (mClient != 0) {
        // Client destructor must run with AudioFlinger client mutex locked
        Mutex::Autolock _l(mClient->audioFlinger()->mClientLock);
        // If the client's reference count drops to zero, the associated destructor
        // must run with AudioFlinger lock held. Thus the explicit clear() rather than
        // relying on the automatic clear() at end of scope.
        mClient.clear();
    }
    // flush the binder command buffer
    IPCThreadState::self()->flushCommands();
}

In other words, there is no need to call releaseBuffer explicitly here to free the shared memory.

@3 Data synchronization

MODE_STREAM uses a ring buffer to synchronize data: one side produces and the other consumes, and a ring buffer is the most reliable structure for that pattern. In practice this is the cooperation of ClientProxy::obtainBuffer, ClientProxy::releaseBuffer, ServerProxy::obtainBuffer, and ServerProxy::releaseBuffer. Below is a brief outline of the ring-buffer logic. The audio stream memory has two parts: a data header and the data itself.

The ring buffer has several key variables: mFront (the read pointer R), mRear (the write pointer W), mFrameCount (the buffer length LEN), and mFrameCountP2 (LEN rounded up to a power of two). The ring-buffer logic is described below with an example and pseudocode.

The ring-buffer logic is:

Ring buffer: initially R = 0, W = 0, buffer length LEN
Write one element: w = W % LEN; buf[w] = data; W++;
Read one element: r = R % LEN; data = buf[r]; R++;
Buffer is empty when: R == W
Buffer is full when: W - R == LEN

Note that mathematically, when LEN is a power of two, the following are equivalent (a bitwise AND is cheaper than a modulo, which is why mFrameCountP2 exists):

w = W % LEN is equivalent to w = W & (LEN - 1)
r = R % LEN is equivalent to r = R & (LEN - 1)

3.2 Data Transfer in MODE_STATIC

@1 Client-side flow

After the native-layer writeToTrack shown earlier runs, data is written directly into track->sharedBuffer()->pointer(). For MODE_STATIC, mSharedBuffer != 0, so the proxy here is a StaticAudioTrackClientProxy:

status_t AudioTrack::createTrack_l(){
   //...
    void* buffers;
    if (mSharedBuffer == 0) {
        buffers = (char*)cblk + sizeof(audio_track_cblk_t);
    } else {
        buffers = mSharedBuffer->pointer();
    }
	//...
    // update proxy
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
    //...
}

StaticAudioTrackClientProxy does not define obtainBuffer either; obtainBuffer in its parent class, ClientProxy, runs, obtaining an empty Buffer into which the audio data is written, followed by releaseBuffer.

@2 Server-side flow

Again we take Track's getNextBuffer (buffer acquisition) method, shown above, as the entry point:

// AudioBufferProvider interface
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
        AudioBufferProvider::Buffer* buffer, int64_t pts __unused)
{
    ServerProxy::Buffer buf;
    size_t desiredFrames = buffer->frameCount;
    buf.mFrameCount = desiredFrames;
    //call mServerProxy's obtainBuffer method
    status_t status = mServerProxy->obtainBuffer(&buf);
    buffer->frameCount = buf.mFrameCount;
    buffer->raw = buf.mRaw;
    if (buf.mFrameCount == 0) {
        mAudioTrackServerProxy->tallyUnderrunFrames(desiredFrames);
    }
    return status;
}

We continue with obtainBuffer on mServerProxy (in MODE_STATIC, mServerProxy is a StaticAudioTrackServerProxy). StaticAudioTrackServerProxy overrides ServerProxy's obtainBuffer. releaseBuffer is handled as above: it is invoked from TrackBase's destructor, so there is no need to call it explicitly to free the Buffer.

@3 Data synchronization

In MODE_STATIC there is no data-synchronization problem.

3.3 Data Transfer Summary

@1 The proxies point to different objects depending on the MODE:

  1. AudioTrack holds mProxy, which manages the shared memory and provides the obtainBuffer and releaseBuffer functions.
  2. Track holds mServerProxy, which manages the shared memory and provides the obtainBuffer and releaseBuffer functions.

@2 AudioTrack and AudioFlinger exchange data producer-consumer style through the mCblkMemory region. The principle of data exchange between ServerProxy and ClientProxy over shared memory is:

  1. When a track is created, AudioFlinger allocates shared audio memory for it; with that buffer as a parameter, AudioTrack and AudioFlinger create mClientProxy and mServerProxy via AudioTrackClientProxy and AudioTrackServerProxy.
  2. AudioTrack (the app side) writes data into the shared buffer through mClientProxy, and AudioFlinger (the server side) reads data out of the shared memory through mServerProxy. Client and server thus form a producer-consumer model over the shared memory via the proxies.

@3 AudioTrackClientProxy and AudioTrackServerProxy (both located in AudioTrackShared.cpp) wrap the client-side and server-side use of the shared buffer in obtainBuffer and releaseBuffer. These interfaces behave as follows:

@@3.1 Client side:

  1. AudioTrackClientProxy::obtainBuffer() obtains a contiguous empty buffer from the audio buffer;
  2. AudioTrackClientProxy::releaseBuffer() puts a data-filled buffer back into the audio buffer.

@@3.2 Server side:

  1. AudioTrackServerProxy::obtainBuffer() obtains a contiguous data-filled buffer from the audio buffer;
  2. AudioTrackServerProxy::releaseBuffer() puts a consumed, now-empty buffer back into the audio buffer.


Reprinted from blog.csdn.net/vviccc/article/details/105312220