Android audio subsystem (3) ------ AudioTrack process analysis

Hello! This is Kite's blog.

Welcome to communicate with me.


This article takes Android N as an example.

Before digging into AudioTrack, here is a picture found on the Internet that briefly describes the correspondence between AudioTrack, PlaybackThread, and output stream devices:

[Figure: relationship between AudioTrack, PlaybackThread, and output stream devices inside AudioFlinger]

Generally speaking, the output stream device determines which type of PlaybackThread it corresponds to, and PlaybackThread instances and output stream devices are in one-to-one correspondence (an OffloadThread only outputs audio data to a compress_offload device, and a MixerThread with a FastMixer only outputs audio data to a low_latency device).

From the relationship diagram of AudioTrack, PlaybackThread, and output stream devices, we can see that AudioTrack sends audio stream data to the corresponding PlaybackThread. So if the application process wants to control those audio streams, for example start playback with start(), stop with stop(), or pause with pause(), what should it do? Note that the application process and AudioFlinger do not run in the same process. AudioFlinger therefore has to provide audio stream management and expose a set of communication interfaces that allow application processes to control the state of the audio streams inside AudioFlinger across processes.

AudioFlinger's audio stream management is implemented by AudioFlinger::PlaybackThread::Track. Track and AudioTrack are in one-to-one correspondence: every time the application creates an AudioTrack, a corresponding Track is created in one of AudioFlinger's PlaybackThreads.

PlaybackThread and AudioTrack/Track have a one-to-many relationship: a single PlaybackThread can host multiple Tracks. Between a Track and its AudioTrack, audio data is transmitted through shared memory, in one of two modes (see the sketch after this list):
1. MODE_STATIC: the data is delivered to the other side in one shot, which is simple and efficient. It is suitable for playback with small memory requirements, such as ringtones and system notification sounds.
2. MODE_STREAM: similar to network-based audio streaming, the audio data is passed to the receiver in multiple writes, strictly as required, until the end. It is typically used when the audio file is large or the audio properties are demanding, such as data with a high sample rate and large bit depth.
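To make the two modes concrete from the application side, here is a minimal Java sketch; loadPcm() and readNextChunk() are assumed helper functions for the example, not real APIs:

// Assumed helpers: loadPcm() returns a short 16-bit PCM clip, readNextChunk() refills a byte[].
int sampleRate = 44100;
byte[] clip = loadPcm(); // short sound that fits in memory

// MODE_STATIC: hand over the whole buffer once, then play.
AudioTrack staticTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        clip.length, AudioTrack.MODE_STATIC);
staticTrack.write(clip, 0, clip.length); // one-shot copy into the shared buffer
staticTrack.play();

// MODE_STREAM: play() first, then keep feeding chunks; write() blocks while the buffer is full.
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack streamTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
streamTrack.play();
byte[] chunk = new byte[minBuf / 2];
int n;
while ((n = readNextChunk(chunk)) > 0) {
    streamTrack.write(chunk, 0, n);
}
streamTrack.stop();
streamTrack.release();
staticTrack.release();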

  • AudioFlinger::PlaybackThread : playback thread base class; audio streams with different output flags correspond to different types of PlaybackThread instances
  • AudioFlinger::PlaybackThread::Track : audio stream management class; it creates an anonymous shared memory region for data exchange between AudioTrack and AudioFlinger
  • AudioFlinger::TrackHandle : the Track object is only responsible for audio stream management and does not expose a cross-process Binder call interface, yet the application process needs to control the audio stream, so an object is required to proxy cross-process communication on Track's behalf. That role is TrackHandle, through which AudioTrack interacts with Track
  • AudioTrack : an API class provided by the Android audio system, responsible for audio stream data output; each audio stream corresponds to one AudioTrack instance, and AudioTracks with different output flags match different AudioFlinger::PlaybackThreads
  • AudioTrack::AudioTrackThread : this thread is created only when the data transfer mode is TRANSFER_CALLBACK; it actively requests data from the user process through the audioCallback callback function and fills it into the buffer. When the transfer mode is TRANSFER_SYNC the thread is not needed, because the user process keeps calling AudioTrack.write() to fill the buffer; when the transfer mode is TRANSFER_SHARED it is not needed either, because the user process creates an anonymous shared memory region and copies all the audio data to be played into it in one shot

The source code contains an application example of AudioTrack, used to test setting the maximum volume on the left and right stereo channels:

//frameworks/base/media/tests/MediaFrameworkTest/src/com/android/mediaframeworktest/functional/audio/MediaAudioTrackTest.java
    public void testSetStereoVolumeMax() throws Exception {
        // constants for test
        final String TEST_NAME = "testSetStereoVolumeMax";
        final int TEST_SR = 22050;
        final int TEST_CONF = AudioFormat.CHANNEL_OUT_STEREO;
        final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
        final int TEST_MODE = AudioTrack.MODE_STREAM;
        final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;

        //-------- initialization --------------
        int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
        AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,
                minBuffSize, TEST_MODE);
        byte data[] = new byte[minBuffSize/2];
        //--------    test        --------------
        track.write(data, 0, data.length);
        track.write(data, 0, data.length);
        track.play();
        float maxVol = AudioTrack.getMaxVolume(); // get the maximum volume value
        assertTrue(TEST_NAME, track.setStereoVolume(maxVol, maxVol) == AudioTrack.SUCCESS);
        //-------- tear down      --------------
        track.release();
    }

This demo contains the general sequence of AudioTrack operations:
Step1: getMinBufferSize, to calculate the minimum buffer size
Step2: create the AudioTrack object
Step3: write, to write audio data
Step4: play, to start playback
Step5: release, to end playback and free resources
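Note that getMinBufferSize returns a size in bytes, while the framework internally counts in frames (one frame holds one sample for every channel; with 16-bit stereo a frame is 4 bytes). A small Java sketch of the relationship, assuming the same parameters as the test above:

int sampleRate = 22050;
int channelCount = 2;   // AudioFormat.CHANNEL_OUT_STEREO
int bytesPerSample = 2; // AudioFormat.ENCODING_PCM_16BIT
int minBuffSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);

int frameSize = channelCount * bytesPerSample;      // bytes per frame
int frameCount = minBuffSize / frameSize;           // frames in the minimum buffer
double bufferMs = 1000.0 * frameCount / sampleRate; // buffer duration in milliseconds
System.out.printf("minBuffSize=%d bytes, frames=%d (~%.1f ms)%n",
        minBuffSize, frameCount, bufferMs);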

//@AudioTrack.cpp
AudioTrack::AudioTrack(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        int32_t notificationFrames,
        audio_session_t sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes,
        bool doNotReconnect,
        float maxRequiredSpeed)
    : mStatus(NO_INIT),
      mState(STATE_STOPPED),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0),
      mSelectedDeviceId(AUDIO_PORT_HANDLE_NONE)
{
    mStatus = set(streamType, sampleRate, format, channelMask,
            frameCount, flags, cbf, user, notificationFrames,
            0 /*sharedBuffer*/, false /*threadCanCallJava*/, sessionId, transferType,
            offloadInfo, uid, pid, pAttributes, doNotReconnect, maxRequiredSpeed);
}
status_t AudioTrack::set(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        int32_t notificationFrames,
        const sp<IMemory>& sharedBuffer,
        bool threadCanCallJava,
        audio_session_t sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes,
        bool doNotReconnect,
        float maxRequiredSpeed)
{
	//......
	// handle default values first.
    if (streamType == AUDIO_STREAM_DEFAULT) {
        streamType = AUDIO_STREAM_MUSIC; // AUDIO_STREAM_DEFAULT falls back to AUDIO_STREAM_MUSIC
    }
    if (pAttributes == NULL) {
        // must not exceed AUDIO_STREAM_PUBLIC_CNT; there are 13 selectable types in total
        if (uint32_t(streamType) >= AUDIO_STREAM_PUBLIC_CNT) {
            ALOGE("Invalid stream type %d", streamType);
            return BAD_VALUE;
        }
        // assign mStreamType; it is used later in createTrack_l
        mStreamType = streamType;
    } else {
        // stream type shouldn't be looked at, this track has audio attributes
        memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
        // assign mStreamType; it is used later in createTrack_l
        mStreamType = AUDIO_STREAM_DEFAULT;
        if ((mAttributes.flags & AUDIO_FLAG_HW_AV_SYNC) != 0) {
            flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_HW_AV_SYNC);
        }
        if ((mAttributes.flags & AUDIO_FLAG_LOW_LATENCY) != 0) {
            flags = (audio_output_flags_t) (flags | AUDIO_OUTPUT_FLAG_FAST);
        }
    }
	//......
	// assign mFlags; it is used later in createTrack_l
    mOrigFlags = mFlags = flags;
    mCbf = cbf;
    if (cbf != NULL) {
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
        // thread begins in paused state, and will not reference us until start()
    }
    // create the IAudioTrack
    status_t status = createTrack_l();
}
  • 1. First determine the streamType and assign it to the AudioTrack member mStreamType (see the attributes-based construction sketch after this list).
  • 2. Assign the flags passed into the AudioTrack constructor to the member mFlags.
  • 3. If cbf (the audioCallback callback function) is not NULL, create an AudioTrackThread thread to service the audioCallback callback (in MODE_STREAM mode, cbf is NULL);
  • 4. The thread is run, but it does not execute immediately; it starts in the paused state and does nothing until start() is called (thread begins in paused state, and will not reference us until start());
  • 5. Call createTrack_l to create the IAudioTrack;
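The pAttributes branch in set() corresponds to constructing the track from audio attributes instead of a legacy stream type. From the Java side (API 21+) that looks roughly like the following sketch; the parameter values are arbitrary examples:

AudioAttributes attrs = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_MEDIA)
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .build();
AudioFormat fmt = new AudioFormat.Builder()
        .setSampleRate(44100)
        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
        .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
        .build();
int minBuf = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
// This constructor hands the attributes down to native set(), which then takes
// its pAttributes != NULL branch and leaves mStreamType as AUDIO_STREAM_DEFAULT.
AudioTrack track = new AudioTrack(attrs, fmt, minBuf,
        AudioTrack.MODE_STREAM, AudioManager.AUDIO_SESSION_ID_GENERATE);
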
status_t AudioTrack::createTrack_l()
{
	const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();

	status = AudioSystem::getOutputForAttr(attr, &output,
                                           mSessionId, &streamType, mClientUid,
                                           mSampleRate, mFormat, mChannelMask,
                                           mFlags, mSelectedDeviceId, mOffloadInfo);

    sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
                                                      mSampleRate,
                                                      mFormat,
                                                      mChannelMask,
                                                      &temp,
                                                      &flags,
                                                      mSharedBuffer,
                                                      output,
                                                      mClientPid,
                                                      tid,
                                                      &mSessionId,
                                                      mClientUid,
                                                      &status);
	// update proxy
	// the proxies manage the shared memory
    if (mSharedBuffer == 0) {
        // MODE_STREAM
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSize);
    } else {
        // MODE_STATIC
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSize);
        mProxy = mStaticProxy;
    }

}
  • 1. Get the audioFlinger service.
  • 2. getOutputForAttr builds audio attributes from the sound type passed in by the AudioTrack, determines the category/group (strategy) from those attributes, finds the device for the strategy, and finally finds the corresponding output for the device. (A device may correspond to multiple outputs, one per sound card, but only one is selected.)
  • 3. Create the AudioFlinger::PlaybackThread::Track. (During AudioTrack creation an output is selected; an output corresponds to a playback device and also to a PlaybackThread, and the application's AudioTrack corresponds to a Track inside that PlaybackThread.)
  • 4. Create AudioTrackClientProxy/StaticAudioTrackClientProxy to manage the shared memory. (The app's AudioTrack and the thread's Track transfer data through this shared memory; see the write-loop sketch below.)
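On the app side, the shared memory is only visible indirectly: in MODE_STREAM a blocking write() returns once its bytes have been copied into the shared buffer, and it may return fewer bytes than requested, or a negative error code. A defensive write loop, as a Java sketch assuming the track and a pcm byte array from earlier:

// pcm is assumed to hold the 16-bit PCM data to be played.
int offset = 0;
while (offset < pcm.length) {
    // Blocks while the shared buffer between AudioTrack (client proxy)
    // and the Track (server proxy) is full.
    int written = track.write(pcm, offset, pcm.length - offset);
    if (written < 0) { // ERROR_INVALID_OPERATION, ERROR_BAD_VALUE, ...
        break;
    }
    offset += written;
}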

The subsequent function call stack is as follows:

AudioTrack::createTrack_l
	AudioSystem::getOutputForAttr
		AudioPolicyService::getOutputForAttr
			AudioPolicyManager::getOutputForAttr
				AudioPolicyManager::getOutputForDevice
					AudioPolicyService::AudioPolicyClient::openOutput
						af->openOutput
	AudioFlinger::createTrack
		AudioFlinger::PlaybackThread::createTrack_l
			AudioFlinger::PlaybackThread::Track::Track
				AudioFlinger::ThreadBase::TrackBase::TrackBase
				new AudioTrackServerProxy/StaticAudioTrackServerProxy
		new TrackHandle
	new AudioTrackClientProxy/StaticAudioTrackClientProxy

Let's look at AudioSystem::getOutputForAttr:

//@AudioSystem.cpp
status_t AudioSystem::getOutputForAttr(const audio_attributes_t *attr,
                                        audio_io_handle_t *output,
                                        audio_session_t session,
                                        audio_stream_type_t *stream,
                                        uid_t uid,
                                        uint32_t samplingRate,
                                        audio_format_t format,
                                        audio_channel_mask_t channelMask,
                                        audio_output_flags_t flags,
                                        audio_port_handle_t selectedDeviceId,
                                        const audio_offload_info_t *offloadInfo)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return NO_INIT;
    return aps->getOutputForAttr(attr, output, session, stream, uid,
                                 samplingRate, format, channelMask,
                                 flags, selectedDeviceId, offloadInfo);
}
//@AudioPolicyInterfaceImpl.cpp
status_t AudioPolicyService::getOutputForAttr(const audio_attributes_t *attr,
                                              audio_io_handle_t *output,
                                              audio_session_t session,
                                              audio_stream_type_t *stream,
                                              uid_t uid,
                                              uint32_t samplingRate,
                                              audio_format_t format,
                                              audio_channel_mask_t channelMask,
                                              audio_output_flags_t flags,
                                              audio_port_handle_t selectedDeviceId,
                                              const audio_offload_info_t *offloadInfo)
{
    if (mAudioPolicyManager == NULL) {
        return NO_INIT;
    }
    ALOGV("getOutput()");
    Mutex::Autolock _l(mLock);

    const uid_t callingUid = IPCThreadState::self()->getCallingUid();
    return mAudioPolicyManager->getOutputForAttr(attr, output, session, stream, uid, samplingRate,
                                    format, channelMask, flags, selectedDeviceId, offloadInfo);
}
//@AudioPolicyManager.cpp
status_t AudioPolicyManager::getOutputForAttr(const audio_attributes_t *attr,
                                              audio_io_handle_t *output,
                                              audio_session_t session,
                                              audio_stream_type_t *stream,
                                              uid_t uid,
                                              uint32_t samplingRate,
                                              audio_format_t format,
                                              audio_channel_mask_t channelMask,
                                              audio_output_flags_t flags,
                                              audio_port_handle_t selectedDeviceId,
                                              const audio_offload_info_t *offloadInfo)
{
    audio_attributes_t attributes;
    if (attr != NULL) {
        if (!isValidAttributes(attr)) {
            ALOGE("getOutputForAttr() invalid attributes: usage=%d content=%d flags=0x%x tags=[%s]",
                  attr->usage, attr->content_type, attr->flags,
                  attr->tags);
            return BAD_VALUE;
        }
        attributes = *attr;
    } else {
        if (*stream < AUDIO_STREAM_MIN || *stream >= AUDIO_STREAM_PUBLIC_CNT) {
            ALOGE("getOutputForAttr():  invalid stream type");
            return BAD_VALUE;
        }
        stream_type_to_audio_attributes(*stream, &attributes);
    }
	// get the routing strategy from the attributes, i.e. determine the category/group
	routing_strategy strategy = (routing_strategy) getStrategyForAttr(&attributes);
	// get the device for the strategy, i.e. the playback device (headset, Bluetooth, speaker) for the category/group
    audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);
	// find which output the device corresponds to
	*output = getOutputForDevice(device, session, *stream,
                                 samplingRate, format, channelMask,
                                 flags, offloadInfo);
}
//@AudioPolicyManager.cpp
audio_io_handle_t AudioPolicyManager::getOutputForDevice(
        audio_devices_t device,
        audio_session_t session,
        audio_stream_type_t stream,
        uint32_t samplingRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        audio_output_flags_t flags,
        const audio_offload_info_t *offloadInfo)
{
	audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
	//......
	status = mpClientInterface->openOutput(profile->getModuleHandle(),
                                               &output,
                                               &config,
                                               &outputDesc->mDevice,
                                               address,
                                               &outputDesc->mLatency,
                                               outputDesc->mFlags);
	//......
	return output;
}
//@AudioPolicyClientImpl.cpp
status_t AudioPolicyService::AudioPolicyClient::openOutput(audio_module_handle_t module,
                                                           audio_io_handle_t *output,
                                                           audio_config_t *config,
                                                           audio_devices_t *devices,
                                                           const String8& address,
                                                           uint32_t *latencyMs,
                                                           audio_output_flags_t flags)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    // af is the AudioFlinger, so this call is AudioFlinger::openOutput
    return af->openOutput(module, output, config, devices, address, latencyMs, flags);
}

getOutputForAttr obtains the output channel (an output can be understood as representing an audio channel at the HAL layer: primary out, low-latency out, offload, direct_pcm, a2dp output, usb_device output, dp output, and so on). For the AudioFlinger::openOutput flow, refer to this article: Android audio subsystem (1) ------ openOutput open process

Therefore, in the end, getOutputForAttr takes attr, streamType, and the other parameters, selects a device, and obtains the output for it (in fact, AudioPolicyClient::openOutput returns the opened output).

sp<IAudioTrack> AudioFlinger::createTrack(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t *frameCount,
        audio_output_flags_t *flags,
        const sp<IMemory>& sharedBuffer,
        audio_io_handle_t output,
        pid_t pid,
        pid_t tid,
        audio_session_t *sessionId,
        int clientUid,
        status_t *status)
{
    sp<PlaybackThread::Track> track;
    sp<TrackHandle> trackHandle;
    sp<Client> client;

	PlaybackThread *thread = checkPlaybackThread_l(output);

	track = thread->createTrack_l(client, streamType, sampleRate, format,
                channelMask, frameCount, sharedBuffer, lSessionId, flags, tid, clientUid, &lStatus);

	// return handle to client
    trackHandle = new TrackHandle(track);
}
  • 1. Use checkPlaybackThread_l to find the corresponding PlaybackThread from the audio_io_handle_t parameter (the two are in one-to-one correspondence)
  • 2. Call PlaybackThread::createTrack_l, which creates a Track object and adds it to mTracks. When the Track is constructed, it allocates a block of memory for data exchange between AudioFlinger and AudioTrack (audio_track_cblk_t* mCblk is the control block) and creates an AudioTrackServerProxy/StaticAudioTrackServerProxy object to manage the buffer (the PlaybackThread uses it to locate the readable data in the buffer)
  • 3. Create TrackHandle, the communication proxy for Track, and assign it to trackHandle

So here we can also see that creating an AudioTrack object causes a Track object to be created in a particular PlaybackThread, and the two correspond to each other.

//@Threads.cpp
sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
        const sp<AudioFlinger::Client>& client,
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t *pFrameCount,
        const sp<IMemory>& sharedBuffer,
        audio_session_t sessionId,
        audio_output_flags_t *flags,
        pid_t tid,
        int uid,
        status_t *status)
{
	//......
	track = new Track(this, client, streamType, sampleRate, format,
                          channelMask, frameCount, NULL, sharedBuffer,
                          sessionId, uid, *flags, TrackBase::TYPE_DEFAULT);
    // PlaybackThread holds an array mTracks containing one or more Tracks;
    // each Track corresponds to an AudioTrack created by the application
	mTracks.add(track);
}
//@Tracks.cpp
AudioFlinger::PlaybackThread::Track::Track(
            PlaybackThread *thread,
            const sp<Client>& client,
            audio_stream_type_t streamType,
            uint32_t sampleRate,
            audio_format_t format,
            audio_channel_mask_t channelMask,
            size_t frameCount,
            void *buffer,
            const sp<IMemory>& sharedBuffer,
            audio_session_t sessionId,
            int uid,
            audio_output_flags_t flags,
            track_type type)
    :   TrackBase(thread, client, sampleRate, format, channelMask, frameCount,
                  (sharedBuffer != 0) ? sharedBuffer->pointer() : buffer,
                  sessionId, uid, true /*isOut*/,
                  (type == TYPE_PATCH) ? ( buffer == NULL ? ALLOC_LOCAL : ALLOC_NONE) : ALLOC_CBLK,
                  type),
    mFillingUpStatus(FS_INVALID),
    // mRetryCount initialized later when needed
        mSharedBuffer(sharedBuffer),
    mStreamType(streamType),
    mName(-1),  // see note below
    mMainBuffer(thread->mixBuffer()),
    mAuxBuffer(NULL),
    mAuxEffectId(0), mHasVolumeController(false),
    mPresentationCompleteFrames(0),
    mFrameMap(16 /* sink-frame-to-track-frame map memory */),
    // mSinkTimestamp
    mFastIndex(-1),
    mCachedVolume(1.0),
    mIsInvalid(false),
    mAudioTrackServerProxy(NULL),
    mResumeToStopping(false),
    mFlushHwPending(false),
    mFlags(flags)
{
	if (sharedBuffer == 0) {
        mAudioTrackServerProxy = new AudioTrackServerProxy(mCblk, mBuffer, frameCount,
                mFrameSize, !isExternalTrack(), sampleRate);
    } else {
        mAudioTrackServerProxy = new StaticAudioTrackServerProxy(mCblk, mBuffer, frameCount,
                mFrameSize);
    }
    mServerProxy = mAudioTrackServerProxy;
}
//@Tracks.cpp
AudioFlinger::ThreadBase::TrackBase::TrackBase(
            ThreadBase *thread,
            const sp<Client>& client,
            uint32_t sampleRate,
            audio_format_t format,
            audio_channel_mask_t channelMask,
            size_t frameCount,
            void *buffer,
            audio_session_t sessionId,
            int clientUid,
            bool isOut,
            alloc_type alloc,
            track_type type)
    :   RefBase(),
        mThread(thread),
        mClient(client),
        mCblk(NULL),
        // mBuffer
        mState(IDLE),
        mSampleRate(sampleRate),
        mFormat(format),
        mChannelMask(channelMask),
        mChannelCount(isOut ?
                audio_channel_count_from_out_mask(channelMask) :
                audio_channel_count_from_in_mask(channelMask)),
        mFrameSize(audio_has_proportional_frames(format) ?
                mChannelCount * audio_bytes_per_sample(format) : sizeof(int8_t)),
        mFrameCount(frameCount),
        mSessionId(sessionId),
        mIsOut(isOut),
        mServerProxy(NULL),
        mId(android_atomic_inc(&nextTrackId)),
        mTerminated(false),
        mType(type),
        mThreadIoHandle(thread->id())
{
    size_t size = sizeof(audio_track_cblk_t);
    size_t bufferSize = (buffer == NULL ? roundup(frameCount) : frameCount) * mFrameSize;
    /* if buffer is NULL and alloc == ALLOC_CBLK */
    if (buffer == NULL && alloc == ALLOC_CBLK) {
    	/* size starts as one header (the control block); if buffer is NULL,
    	   i.e. the application did not allocate one, grow size by bufferSize */
        size += bufferSize;
    }

    if (client != 0) {
    	/* allocate the shared memory */
        mCblkMemory = client->heap()->allocate(size);
    } else {
        // this syntax avoids calling the audio_track_cblk_t constructor twice
        mCblk = (audio_track_cblk_t *) new uint8_t[size];
        // assume mCblk != NULL
    }
    // construct the shared structure in-place.
    if (mCblk != NULL) {
        new(mCblk) audio_track_cblk_t();
        switch (alloc) {
        case ALLOC_READONLY: {
            const sp<MemoryDealer> roHeap(thread->readOnlyHeap());
            if (roHeap == 0 ||
                    (mBufferMemory = roHeap->allocate(bufferSize)) == 0 ||
                    (mBuffer = mBufferMemory->pointer()) == NULL) {
                mCblkMemory.clear();
                mBufferMemory.clear();
                return;
            }
            memset(mBuffer, 0, bufferSize);
            } break;
        case ALLOC_PIPE:
            mBufferMemory = thread->pipeMemory();
            // mBuffer is the virtual address as seen from current process (mediaserver),
            // and should normally be coming from mBufferMemory->pointer().
            // However in this case the TrackBase does not reference the buffer directly.
            // It should references the buffer via the pipe.
            // Therefore, to detect incorrect usage of the buffer, we set mBuffer to NULL.
            mBuffer = NULL;
            break;
        case ALLOC_CBLK:
            // clear all buffers
            if (buffer == NULL) {
                mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
                memset(mBuffer, 0, bufferSize);
            } else {
                mBuffer = buffer;
            }
            break;
        case ALLOC_LOCAL:
            mBuffer = calloc(1, bufferSize);
            break;
        case ALLOC_NONE:
            mBuffer = buffer;
            break;
        }
}

In the Track constructor above, if the application uses MODE_STREAM, an AudioTrackServerProxy is created to manage the buffer; otherwise, in MODE_STATIC, a StaticAudioTrackServerProxy is created to manage it.

As mentioned before: the app creates an AudioTrack, and then AudioFlinger's PlaybackThread creates the corresponding Track; they pass data through shared memory:
Track uses AudioTrackServerProxy/StaticAudioTrackServerProxy to manage the buffer, and correspondingly, in AudioTrack::set the AudioTrack uses AudioTrackClientProxy/StaticAudioTrackClientProxy to manage it.

AudioTrackServerProxy/StaticAudioTrackServerProxy in Track both inherit from ServerProxy, which manages the shared memory and provides the obtainBuffer and releaseBuffer functions: the PlaybackThread uses obtainBuffer to obtain the region of the buffer that contains data, and uses releaseBuffer to release it after the data has been consumed.

//@AudioTrackShared.h
// Proxy used by AudioFlinger server
class ServerProxy : public Proxy {
public:
    virtual status_t    obtainBuffer(Buffer* buffer, bool ackFlush = false);
    virtual void        releaseBuffer(Buffer* buffer);
};
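Conceptually, the client proxy and server proxy implement a single-producer, single-consumer ring buffer on top of the shared audio_track_cblk_t: the client advances the write index, the server advances the read index. The following Java class is only a conceptual sketch of that obtainBuffer/releaseBuffer contract, not the real AudioTrackShared implementation:

// Conceptual single-producer/single-consumer ring buffer mimicking the
// obtainBuffer/releaseBuffer contract. Not the real AudioTrackShared code.
import java.util.concurrent.atomic.AtomicLong;

final class RingBuffer {
    private final byte[] data;
    private final AtomicLong front = new AtomicLong(); // read index (server advances)
    private final AtomicLong rear = new AtomicLong();  // write index (client advances)

    RingBuffer(int capacity) { data = new byte[capacity]; }

    // Server side: how many contiguous bytes can be read right now (like obtainBuffer).
    int obtainReadable() {
        long f = front.get(), r = rear.get();
        int avail = (int) (r - f);
        int untilWrap = data.length - (int) (f % data.length);
        return Math.min(avail, untilWrap); // contiguous region only
    }

    int readIndex() { return (int) (front.get() % data.length); }

    // Server side: mark `count` bytes as consumed (like releaseBuffer).
    void releaseRead(int count) { front.addAndGet(count); }

    // Client side: copy in as much as fits, then advance the write index.
    int write(byte[] src, int off, int len) {
        long f = front.get(), r = rear.get();
        int space = data.length - (int) (r - f);
        int n = Math.min(space, len);
        for (int i = 0; i < n; i++) {
            data[(int) ((r + i) % data.length)] = src[off + i];
        }
        rear.addAndGet(n);
        return n;
    }
}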

Android is so big, and there are so many processes involved...

In general, setting up an AudioTrack can be divided into three steps:
1. Use the attributes of the AudioTrack to find the corresponding output and PlaybackThread according to the audio policy
2. Create the corresponding Track in that PlaybackThread
3. Create shared memory between the app's AudioTrack and the Track in the PlaybackThread's mTracks
