Android audio subsystem (5) ------ AudioFlinger processing flow

Hello! This is Kite's blog.

You are welcome to reach out and discuss.


As the engine of the Android audio system, AudioFlinger is responsible for managing the input and output stream devices and for processing and transporting audio stream data. This work is carried out by the playback threads (PlaybackThread and its derived subclasses) and the recording thread (RecordThread).
The playback thread class hierarchy, rooted at Thread, is:

  • class Thread : virtual public RefBase (system/core/include/utils/Thread.h)
  • class ThreadBase : public Thread (frameworks/av/services/audioflinger/Threads.h)
  • class PlaybackThread : public ThreadBase (frameworks/av/services/audioflinger/Threads.h)
  • class MixerThread : public PlaybackThread (frameworks/av/services/audioflinger/Threads.h)

As described earlier in Android audio subsystem (1) ------ openOutput open process, a MixerThread object is created in openOutput_l:

sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{
    //......
    if (status == NO_ERROR) {
        // create the playback thread
        PlaybackThread *thread;
        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                || !isValidPcmSinkFormat(config->format)
                || !isValidPcmSinkChannelMask(config->channel_mask)) {
            thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);
        } else {
            thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);
        }
        // add the playback thread to mPlaybackThreads
        mPlaybackThreads.add(*output, thread);
        return thread;
    }
    //......
}

After the MixerThread is created, it is added to mPlaybackThreads.
AudioFlinger keeps two member variables that record its record and playback threads, keyed by the I/O handle:

DefaultKeyedVector< audio_io_handle_t, sp<PlaybackThread>>  mPlaybackThreads;
DefaultKeyedVector< audio_io_handle_t, sp<RecordThread>>    mRecordThreads;
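
Here is a minimal stand-alone sketch (my own example, not AudioFlinger code; MyThread is a placeholder for PlaybackThread/RecordThread) of how a thread can be looked up in such a DefaultKeyedVector by its audio_io_handle_t:

#include <system/audio.h>      // audio_io_handle_t
#include <utils/KeyedVector.h>
#include <utils/RefBase.h>

using namespace android;

class MyThread : public RefBase {};

sp<MyThread> findThread(const DefaultKeyedVector<audio_io_handle_t, sp<MyThread>>& threads,
                        audio_io_handle_t output)
{
    // valueFor() returns the default value (a null sp<>) when the key is absent,
    // so the caller must check the result before using it.
    return threads.valueFor(output);
}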

First look at the constructor of MixerThread:


AudioFlinger::MixerThread::MixerThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output,
        audio_io_handle_t id, audio_devices_t device, bool systemReady, type_t type)
    :   PlaybackThread(audioFlinger, output, id, device, type, systemReady),
        // mAudioMixer below
        // mFastMixer below
        mFastMixerFutex(0),
        mMasterMono(false)
        // mOutputSink below
        // mPipeSink below
        // mNormalSink below
{
    //......
    mAudioMixer = new AudioMixer(mNormalFrameCount, mSampleRate);
    mOutputSink = new AudioStreamOutSink(output->stream);
    mOutputSink->negotiate(offers, 1, NULL, numCounterOffers);
    // initialize fast mixer depending on configuration
    bool initFastMixer;
    switch (kUseFastMixer) {
    case FastMixer_Never:
        initFastMixer = false;
        break;
    case FastMixer_Always:
        initFastMixer = true;
        break;
    case FastMixer_Static:
    case FastMixer_Dynamic:
        initFastMixer = mFrameCount < mNormalFrameCount;
        break;
    }
    //......
}

Two objects are created here: the AudioMixer object mAudioMixer, which is the key to the mixing process, and the AudioStreamOutSink object mOutputSink, on which negotiate() is called to agree on the output format. Finally, the configuration (kUseFastMixer) decides whether a fast mixer should be initialized (initFastMixer).
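
As a minimal restatement of that fast-mixer decision (my own sketch, not the AOSP code; a plain enum stands in for kUseFastMixer): with the Static/Dynamic policies, the FastMixer is only worth initializing when the HAL period (mFrameCount) is shorter than the normal mixer period (mNormalFrameCount).

#include <cstddef>

enum FastMixerPolicy { FastMixer_Never, FastMixer_Always, FastMixer_Static, FastMixer_Dynamic };

bool shouldInitFastMixer(FastMixerPolicy policy, size_t halFrameCount, size_t normalFrameCount)
{
    switch (policy) {
    case FastMixer_Never:   return false;
    case FastMixer_Always:  return true;
    case FastMixer_Static:
    case FastMixer_Dynamic: return halFrameCount < normalFrameCount; // HAL period shorter than normal mix period
    }
    return false;
}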

Normally, a playback thread's job is to process the upper layer's playback requests in a loop, pass the audio data down to the next layer, and finally write it to the hardware device, so there must be a thread loop somewhere.

From the MixerThread inheritance relationship above, its ultimate base class is RefBase. A RefBase object has its onFirstRef() method called the first time it is referenced by a strong pointer (sp<>), so onFirstRef() runs as soon as the thread object is first held by an sp<>.
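
Here is a minimal stand-alone sketch of that sp<>/RefBase behaviour (my own demo class, not AudioFlinger code): onFirstRef() fires the first time the object is held by a strong pointer.

#include <utils/RefBase.h>
#include <utils/StrongPointer.h>

using namespace android;

class Demo : public RefBase {
protected:
    void onFirstRef() override {
        // PlaybackThread makes its run(...) call at this point
    }
};

void example() {
    sp<Demo> demo = new Demo();   // first strong reference -> onFirstRef() is called
}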

//@Threads.cpp
void AudioFlinger::PlaybackThread::onFirstRef()
{
    run(mThreadName, ANDROID_PRIORITY_URGENT_AUDIO);
}

It is a very simple function that just calls the run method. Let's look at its implementation:

//@system/core/libutils/Threads.cpp
status_t Thread::run(const char* name, int32_t priority, size_t stack)
{
    //......
    // This function creates a new thread (via createThreadEtc or
    // androidCreateRawThreadEtc) whose entry point is _threadLoop;
    // _threadLoop calls the subclass's threadLoop() and decides whether
    // to keep looping.
    if (mCanCallJava) {
        res = createThreadEtc(_threadLoop,
                this, name, priority, stack, &mThread);
    } else {
        res = androidCreateRawThreadEtc(_threadLoop,
                this, name, priority, stack, &mThread);
    }
    //......
}

Here a new thread is started with _threadLoop as its entry point, and _threadLoop in turn calls the subclass's threadLoop(), i.e. PlaybackThread::threadLoop().

int Thread::_threadLoop(void* user)
{
    //......
    do {
        bool result;
        if (first) {
            first = false;
            self->mStatus = self->readyToRun();
            result = (self->mStatus == NO_ERROR);

            if (result && !self->exitPending()) {
                result = self->threadLoop();
            }
        } else {
            result = self->threadLoop();
        }
        //......
    } while(strong != 0);
}

Here self is the Thread subclass object. If threadLoop() returns false, or the thread requests an exit, the while loop terminates.

In short, run() is fairly simple: it starts a new thread that indirectly calls threadLoop(), which then handles the mixing work in a loop.
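
A minimal sketch of this libutils Thread pattern (a hypothetical worker class, not AudioFlinger code): run() spawns a thread whose entry point repeatedly calls threadLoop(); returning true keeps the loop going, returning false ends it.

#include <utils/Thread.h>

using namespace android;

class Worker : public Thread {
    bool threadLoop() override {
        // do one iteration of work here (PlaybackThread mixes and writes here)
        return true;      // keep getting called; return false to stop
    }
};

void startWorker() {
    sp<Worker> w = new Worker();
    w->run("Worker");     // creates the thread and starts calling threadLoop()
}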

With this, the audio output path is established, and an AudioTrack can feed data into it.

Next, look at the loop body, PlaybackThread::threadLoop():

bool AudioFlinger::PlaybackThread::threadLoop()
{
    //......
    while (!exitPending())
    {
        // Handle configuration changes. When a configuration-change event occurs,
        // sendConfigEvent_l() is called to notify the PlaybackThread so that it can
        // process the event in time; a typical config event is an audio route switch.
        processConfigEvents_l();
        //......
        // If there is no active track and standbyTime has expired, or suspend is
        // required, enter standby.
        // Right after a MixerThread is created, mActiveTracks is empty and the current
        // time is already past standbyTime, so the thread goes into standby.
        if ((!mActiveTracks.size() && systemTime() > mStandbyTimeNs) ||
                                isSuspended()) {
            // put audio hardware into standby after short delay
            if (shouldStandby_l()) {
                threadLoop_standby();
                mStandby = true;
            }
            // no usable active track and no pending config events
            if (!mActiveTracks.size() && mConfigEvents.isEmpty()) {
                // we're about to wait, flush the binder command buffer
                IPCThreadState::self()->flushCommands();
                clearOutputTracks();
                // MixerThread sleeps here until AudioTrack::start() broadcasts a wake-up
                mWaitWorkCV.wait(mLock);
                continue;
            }
        }
        // mMixerStatusIgnoringFastTracks is also updated internally
        // prepare the tracks for mixing
        mMixerStatus = prepareTracks_l(&tracksToRemove);

        if (mBytesRemaining == 0) {
            mCurrentWriteLength = 0;
            // mixing only happens once the tracks are ready
            if (mMixerStatus == MIXER_TRACKS_READY) {
                // threadLoop_mix() sets mCurrentWriteLength
                threadLoop_mix(); // do the mixing
            } else if ((mMixerStatus != MIXER_DRAIN_TRACK)
                        && (mMixerStatus != MIXER_DRAIN_ALL)) {
                // threadLoop_sleepTime sets mSleepTimeUs to 0 if data
                // must be written to HAL
                threadLoop_sleepTime(); // if not ready, sleep for a while
                if (mSleepTimeUs == 0) {
                    mCurrentWriteLength = mSinkBufferSize;
                }
            }
            //......
        }
        //......
        if (!waitingAsyncCallback()) {
            // mSleepTimeUs == 0 means we must write to audio hardware
            if (mSleepTimeUs == 0) {
                mLastWriteTime = systemTime();  // also used for dumpsys
                ret = threadLoop_write(); // actually writes the data down to the HAL
                lastWriteFinished = systemTime();
                delta = lastWriteFinished - mLastWriteTime; // how long the write took
            } else if ((mMixerStatus == MIXER_DRAIN_TRACK) ||
                        (mMixerStatus == MIXER_DRAIN_ALL)) {
                threadLoop_drain();
            }
            if (mType == MIXER && !mStandby) {
                // write blocked detection
                // The write was too slow! maxPeriod is derived from the frame count and
                // sample rate and reflects the hardware latency, i.e. the time I2S needs
                // to play out the DMA buffer.
                if (delta > maxPeriod) {
                    mNumDelayedWrites++;
                    // an xrun occurred; log a warning
                    if ((lastWriteFinished - lastWarning) > kWarningThrottleNs) {
                        ATRACE_NAME("underrun");
                        ALOGW("write blocked for %llu msecs, %d delayed writes, thread %p",
                                (unsigned long long) ns2ms(delta), mNumDelayedWrites, this);
                        lastWarning = lastWriteFinished;
                    }
                }
            }
        }
        threadLoop_removeTracks(tracksToRemove); // finally, remove the finished tracks
        clearOutputTracks();
    }
    threadLoop_exit();
}

The whole function is fairly complicated, but the general flow can be broken into four steps (a condensed sketch follows the list):
1. prepareTracks_l() checks the state of each track; tracks in the ACTIVE state on mActiveTracks are prepared for mixing.
2. threadLoop_mix() does the mixing, combining the tracks that are active at the same time.
3. threadLoop_write() writes the mixed data down to the HAL for playback.
4. threadLoop_removeTracks() removes the tracks that have finished or been stopped.
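
To keep the big picture in mind, here is a heavily simplified sketch of one loop iteration (stub functions standing in for the real PlaybackThread members, not the AOSP code), just to show the per-iteration order of operations:

enum mixer_state { MIXER_IDLE, MIXER_TRACKS_READY };

mixer_state prepareTracks() { return MIXER_TRACKS_READY; }  // stand-in for prepareTracks_l()
void mix()          {}   // stand-in for threadLoop_mix()
void writeToHal()   {}   // stand-in for threadLoop_write()
void removeTracks() {}   // stand-in for threadLoop_removeTracks()

void playbackLoopOnce() {
    if (prepareTracks() == MIXER_TRACKS_READY) {
        mix();            // fill the sink buffer with the mixed result
    }
    writeToHal();         // push the sink buffer down to the HAL
    removeTracks();       // drop tracks that stopped or were paused
}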

Let's look at the key steps in more detail.

prepareTracks_l prepares data:

AudioFlinger::PlaybackThread::mixer_state AudioFlinger::MixerThread::prepareTracks_l(
        Vector< sp<Track> > *tracksToRemove)
{
    //......
    // find out which tracks need to be processed
    size_t count = mActiveTracks.size();
    // handle each active track in turn
    for (size_t i=0 ; i<count ; i++) {
        const sp<Track> t = mActiveTracks[i].promote();
        Track* const track = t.get();
        // the shared control block of this track
        audio_track_cblk_t* cblk = track->cblk();
        // how many frames need to be prepared for playback
        const uint32_t sampleRate = track->mAudioTrackServerProxy->getSampleRate();
        AudioPlaybackRate playbackRate = track->mAudioTrackServerProxy->getPlaybackRate();

        desiredFrames = sourceFramesNeededWithTimestretch(
                sampleRate, mNormalFrameCount, mSampleRate, playbackRate.mSpeed);
        desiredFrames += mAudioMixer->getUnreleasedFrames(track->name());

        uint32_t minFrames = 1;
        // sharedBuffer() == 0 means this AudioTrack is not in static mode,
        // i.e. the data is streamed rather than delivered in one shot
        if ((track->sharedBuffer() == 0) && !track->isStopped() && !track->isPausing() &&
                (mMixerStatusIgnoringFastTracks == MIXER_TRACKS_READY)) {
            minFrames = desiredFrames;
        }
        // data is ready: set the volume and other mixer parameters
        size_t framesReady = track->framesReady();
        mAudioMixer->setParameter(name, param, AudioMixer::VOLUME0, &vlf);
    }
    //......
}

Whew, this function is long and complicated; rather than walking through every line, I will lean on other people's analysis here.

When AudioFlinger::PlaybackThread::threadLoop() learns that something has changed, it calls prepareTracks_l() to re-prepare the audio streams and the mixer: Tracks in the ACTIVE state are added to mActiveTracks, the other Tracks are removed from mActiveTracks, and then the AudioMixer is prepared again.

prepareTracks_l() prepares the audio streams and the mixer. The function is very long, so instead of analyzing it in detail, here are the main points of the flow (a simplified sketch follows the list):

  • Traverse mActiveTracks, handling the Tracks on it one by one, and check whether each Track is in the ACTIVE state;
  • If the Track is ACTIVE, check whether its data is ready;
  • Configure the mixer parameters according to the stream's volume, format, channel count, track sample rate, and hardware device sample rate;
  • If the Track is in the PAUSED or STOPPED state, add it to the tracksToRemove vector.
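
Below is a simplified, hypothetical sketch of that per-track decision (my own stand-in types, not the real prepareTracks_l): ready ACTIVE tracks configure the mixer, finished ones are collected for removal.

#include <vector>

struct TrackInfo {
    bool   active;
    bool   stoppedOrPaused;
    size_t framesReady;
    size_t framesNeeded;
};

enum MixerStatus { MIXER_IDLE, MIXER_TRACKS_READY };

MixerStatus prepareTracksSketch(std::vector<TrackInfo>& tracks,
                                std::vector<TrackInfo*>& tracksToRemove)
{
    MixerStatus status = MIXER_IDLE;
    for (TrackInfo& t : tracks) {
        if (t.active && t.framesReady >= t.framesNeeded) {
            // configure mixer here: volume, format, channel mask, resampler, buffer provider
            status = MIXER_TRACKS_READY;
        } else if (t.stoppedOrPaused) {
            tracksToRemove.push_back(&t);   // stopped/paused tracks get removed later
        }
    }
    return status;
}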

Once prepareTracks_l() has finished preparing the data, the mixing operation starts, i.e. threadLoop_mix():

void AudioFlinger::MixerThread::threadLoop_mix()
{
    //......
    // mix buffers...
    mAudioMixer->process();
}

Inside is the actual mixing. I will skip the details for now and come back to challenge them when my skills are up to it!

After that, the mixed data is written to the HAL layer and from there to the hardware device:

ssize_t AudioFlinger::MixerThread::threadLoop_write()
{
    //......
    if (mFastMixer != 0) {
        FastMixerStateQueue *sq = mFastMixer->sq();
        FastMixerState *state = sq->begin();
        if (state->mCommand != FastMixerState::MIX_WRITE &&
                (kUseFastMixer != FastMixer_Dynamic || state->mTrackMask > 1)) {
            if (state->mCommand == FastMixerState::COLD_IDLE) {
                // FIXME workaround for first HAL write being CPU bound on some devices
                mOutput->write((char *)mSinkBuffer, 0);
            }
        }
    }
    return PlaybackThread::threadLoop_write();
}

ssize_t AudioFlinger::PlaybackThread::threadLoop_write()
{
    //......
    // If an NBAIO sink is present, use it to write the normal mixer's submix
    if (mNormalSink != 0) {
        const size_t count = mBytesRemaining / mFrameSize;
        ssize_t framesWritten = mNormalSink->write((char *)mSinkBuffer + offset, count);
        if (framesWritten > 0) {
            bytesWritten = framesWritten * mFrameSize;
        } else {
            bytesWritten = framesWritten;
        }
    // otherwise use the HAL / AudioStreamOut directly
    } else {
        bytesWritten = mOutput->write((char *)mSinkBuffer + offset, mBytesRemaining);
    }
    //......
}

The main job here is to forward to the HAL layer's write() and push the data down to the lower layers.
For the details, see: Android audio subsystem (2) ------ threadLoop_write data writing process.

Finally, threadLoop_removeTracks is called to remove the tracks listed in tracksToRemove. For each track in this list, its associated output receives a stop request (i.e. AudioSystem::stopOutput(...)).

void AudioFlinger::MixerThread::threadLoop_removeTracks(const Vector< sp<Track> >& tracksToRemove)
{
    //......
    PlaybackThread::threadLoop_removeTracks(tracksToRemove);
}

void AudioFlinger::PlaybackThread::threadLoop_removeTracks(
        const Vector< sp<Track> >& tracksToRemove)
{
    //......
    size_t count = tracksToRemove.size();
    if (count > 0) {
        for (size_t i = 0 ; i < count ; i++) {
            const sp<Track>& track = tracksToRemove.itemAt(i);
            if (track->isExternalTrack()) {
                AudioSystem::stopOutput(mId, track->streamType(),
                                        track->sessionId());
                if (track->isTerminated()) {
                    AudioSystem::releaseOutput(mId, track->streamType(),
                                               track->sessionId());
                }
            }
        }
    }
}

By the way, there is one more important thing. As noted in the comments in AudioFlinger::PlaybackThread::threadLoop(), the thread sleeps when there are no active tracks, so when does it wake up?
The answer is in Track::start:

status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event __unused,
                                                    audio_session_t triggerSession __unused)
{
    //......
    status = playbackThread->addTrack_l(this);
}

status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track>& track)
{
    //......
    onAddNewTrack_l();
}

void AudioFlinger::PlaybackThread::onAddNewTrack_l()
{
    //......
    ALOGV("signal playback thread");
    // broadcast to wake up the playback thread
    broadcast_l();
}

threadLoop() is woken up here.
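
Here is a minimal sketch of this wait/broadcast pattern (hypothetical names, not the real PlaybackThread members): the loop waits on a Condition while there is nothing to do, and adding work wakes it up, just like mWaitWorkCV.wait(mLock) and broadcast_l() above.

#include <utils/Mutex.h>
#include <utils/Condition.h>

using namespace android;

struct WaitDemo {
    Mutex lock;
    Condition waitWorkCV;
    bool hasWork = false;

    void loopOnce() {
        Mutex::Autolock _l(lock);
        while (!hasWork) {
            waitWorkCV.wait(lock);   // sleeps, like mWaitWorkCV.wait(mLock)
        }
        // ... process the work ...
    }

    void addWork() {
        Mutex::Autolock _l(lock);
        hasWork = true;
        waitWorkCV.broadcast();      // wakes the loop, like broadcast_l()
    }
};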

The three most commonly used interfaces for audio flow control are:

  • AudioFlinger::PlaybackThread::Track::start: start playback: set the Track to the ACTIVE state, add it to the mActiveTracks vector, and finally call AudioFlinger::PlaybackThread::broadcast_l() to tell the PlaybackThread that the situation has changed;
  • AudioFlinger::PlaybackThread::Track::stop: stop playback: put the Track into the STOPPED state, and finally call AudioFlinger::PlaybackThread::broadcast_l() to tell the PlaybackThread that the situation has changed;
  • AudioFlinger::PlaybackThread::Track::pause: pause playback: put the Track into the PAUSING state, and finally call AudioFlinger::PlaybackThread::broadcast_l() to tell the PlaybackThread that the situation has changed.

Each MixerThread has exactly one corresponding AudioMixer, whose job is to perform the actual mixing.
AudioMixer's external interface mainly covers parameters (setParameter), resampling (setResampler), volume (adjustVolumeRamp), buffers (setBufferProvider), and tracks (getTrackName).

The core of AudioMixer is a variable mState of type state_t, and all mixing work will be reflected in this variable:

//@AudioMixer.h
    struct state_t {
        uint32_t        enabledTracks;
        uint32_t        needsChanged;
        size_t          frameCount;
        process_hook_t  hook;   // one of process__*, never NULL
        int32_t         *outputTemp;
        int32_t         *resampleTemp;
        NBLog::Writer*  mLog;
        int32_t         reserved[1];
        // FIXME allocate dynamically to save some memory when maxNumTracks < MAX_NUM_TRACKS
        track_t         tracks[MAX_NUM_TRACKS] __attribute__((aligned(32)));
    };

The tracks array has size MAX_NUM_TRACKS = 32, which means up to 32 tracks can be mixed simultaneously; its element type track_t describes one track. The setParameter interface ultimately modifies the properties of these tracks.

The threadLoop in AudioFlinger keeps calling prepareTracks_l to prepare data; each pass is effectively an adjustment of all Tracks, and whenever a property changes, AudioMixer is notified through setParameter.

AudioMixer::process(), seen above in threadLoop_mix(), internally calls mState.hook(&mState). hook is a function pointer that points to different implementations depending on the scenario:

When AudioMixer is initialized, the hook points to process__nop;

mState.hook = process__nop;

When the state or parameters change, the hook is redirected to process__validate:

//@AudioMixer.cpp
void AudioMixer::setParameter(int name, int target, int param, void *value)
{
    //......
    invalidateState(1 << name);
}

void AudioMixer::invalidateState(uint32_t mask)
{
    //......
    mState.hook = process__validate;
}

process__validate will point the hook to different functions according to different scenarios:

void AudioMixer::process__validate(state_t* state)
{
    //......
    // default value
    state->hook = process__nop;
    // for the tracks that are enabled
    if (countActiveTracks > 0) {
        if (resampling)
            state->hook = process__genericResampling;   // resampling needed
        else
            state->hook = process__genericNoResampling; // no resampling
    }
}
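
As a tiny sketch of this hook-dispatch pattern (my own stand-in names, not the AOSP functions): process() simply jumps through whatever function pointer process__validate last installed for the current track configuration.

struct StateSketch;
typedef void (*process_hook_t)(StateSketch* state);

struct StateSketch {
    process_hook_t hook;
    bool resampling;
    int  activeTracks;
};

static void processNop(StateSketch*)          { /* nothing enabled */ }
static void processNoResampling(StateSketch*) { /* mix at the native rate */ }
static void processResampling(StateSketch*)   { /* resample, then mix */ }

static void validate(StateSketch* state) {
    state->hook = processNop;
    if (state->activeTracks > 0) {
        state->hook = state->resampling ? processResampling : processNoResampling;
    }
}

static void process(StateSketch* state) {
    state->hook(state);   // like mState.hook(&mState) in AudioMixer::process()
}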

Origin blog.csdn.net/Guet_Kite/article/details/114799171