Android Framework Audio Subsystem (06) AudioTrack creation

Master link for this series: topic directory for the Android Framework audio subsystem articles


Summary and description of key points in this chapter:

This chapter analyzes the constructor sub-branch of the AudioTrack creation process (the upper-left part of the series mind map), concentrating on the native AudioTrack constructor. As the previous section showed, the Java-layer AudioTrack ultimately calls down into the native-layer AudioTrack, so the native implementation is the core of our analysis. In particular, this chapter examines how an AudioTrack gets associated with an output and its PlaybackThread.


1 AudioTrack constructor analysis

The constructors of the AudioTrack class in the C++ layer are as follows:

AudioTrack::AudioTrack() // no-arg constructor; set() must be called afterwards
    : mStatus(NO_INIT),
      mIsTimed(false),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0)
{
    mAttributes.content_type = AUDIO_CONTENT_TYPE_UNKNOWN;
    mAttributes.usage = AUDIO_USAGE_UNKNOWN;
    mAttributes.flags = 0x0;
    strcpy(mAttributes.tags, "");
}

AudioTrack::AudioTrack( // parameterized constructor; calls set() itself
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        int sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes)
    : mStatus(NO_INIT),
      mIsTimed(false),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0)
{
    mStatus = set(streamType, sampleRate, format, channelMask,
            frameCount, flags, cbf, user, notificationFrames,
            0 /*sharedBuffer*/, false /*threadCanCallJava*/, sessionId, transferType,
            offloadInfo, uid, pid, pAttributes);
}

As you can see, an AudioTrack can be constructed in two ways (a minimal usage sketch follows the list):

  • The no-argument constructor, after which the set method must be called to configure the track.
  • The parameterized constructor, which calls the set method directly with the given parameters.
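
As a minimal usage sketch of the two styles (the parameter values below are illustrative, not taken from the article; most trailing parameters of both the constructor and set() have defaults in AudioTrack.h of this era):

// Style 1: no-arg constructor followed by set()
sp<AudioTrack> track1 = new AudioTrack();
status_t st = track1->set(AUDIO_STREAM_MUSIC,          // streamType
                          44100,                       // sampleRate
                          AUDIO_FORMAT_PCM_16_BIT,     // format
                          AUDIO_CHANNEL_OUT_STEREO);   // channelMask; rest defaulted

// Style 2: parameterized constructor, which calls set() internally
sp<AudioTrack> track2 = new AudioTrack(AUDIO_STREAM_MUSIC,
                                       44100,
                                       AUDIO_FORMAT_PCM_16_BIT,
                                       AUDIO_CHANNEL_OUT_STEREO,
                                       0 /*frameCount: let the system choose*/);
if (st != NO_ERROR || track2->initCheck() != NO_ERROR) {
    // construction failed; mStatus carries the error code
}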

2 Detailed analysis of the set function

Next we analyze AudioTrack's core method, set(). Its code is as follows:

status_t AudioTrack::set(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        //...
        const audio_attributes_t* pAttributes)
{
    // set the audio data transfer type ...
    mSharedBuffer = sharedBuffer;
    mTransfer = transferType;

    AutoMutex lock(mLock);

    // invariant that mAudioTrack != 0 is true only after set() returns successfully
    if (mAudioTrack != 0) {
        return INVALID_OPERATION;
    }

    // stream type setup: AUDIO_STREAM_DEFAULT falls back to AUDIO_STREAM_MUSIC
    if (streamType == AUDIO_STREAM_DEFAULT) {
        streamType = AUDIO_STREAM_MUSIC;
    }
    if (pAttributes == NULL) {
        if (uint32_t(streamType) >= AUDIO_STREAM_PUBLIC_CNT) {
            ALOGE("Invalid stream type %d", streamType);
            return BAD_VALUE;
        }
        mStreamType = streamType;
    } else {
        // stream type shouldn't be looked at, this track has audio attributes
        memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
        mStreamType = AUDIO_STREAM_DEFAULT;
    }

    // audio format setup: the sample depth defaults to 16 bit
    if (format == AUDIO_FORMAT_DEFAULT) {
        format = AUDIO_FORMAT_PCM_16_BIT;
    }

    // validate parameters
    if (!audio_is_valid_format(format)) {
        ALOGE("Invalid format %#x", format);
        return BAD_VALUE;
    }
    mFormat = format;

    // validate the output channel mask
    if (!audio_is_output_channel(channelMask)) {
        ALOGE("Invalid channel mask %#x", channelMask);
        return BAD_VALUE;
    }
    mChannelMask = channelMask;
    uint32_t channelCount = audio_channel_count_from_out_mask(channelMask);
    mChannelCount = channelCount;
    //...

    // sample-rate handling: 0 means "use the output's rate" (resolved later in
    // createTrack_l), but a rate must be specified for direct outputs
    if (sampleRate == 0 && (flags & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {
        return BAD_VALUE;
    }
    mSampleRate = sampleRate;

    if (offloadInfo != NULL) {
        mOffloadInfoCopy = *offloadInfo;
        mOffloadInfo = &mOffloadInfoCopy;
    } else {
        mOffloadInfo = NULL;
    }
    
    // initial volume of both left and right channels is set to maximum
    mVolume[AUDIO_INTERLEAVE_LEFT] = 1.0f;
    mVolume[AUDIO_INTERLEAVE_RIGHT] = 1.0f;
    mSendLevel = 0.0f;
    // mFrameCount is initialized in createTrack_l
    mReqFrameCount = frameCount;
    mNotificationFramesReq = notificationFrames;
    mNotificationFramesAct = 0;
    if (sessionId == AUDIO_SESSION_ALLOCATE) {
        mSessionId = AudioSystem::newAudioUniqueId();
    } else {
        mSessionId = sessionId;
    }
    int callingpid = IPCThreadState::self()->getCallingPid();
    int mypid = getpid();
    if (uid == -1 || (callingpid != mypid)) {
        mClientUid = IPCThreadState::self()->getCallingUid();
    } else {
        mClientUid = uid;
    }
    if (pid == -1 || (callingpid != mypid)) {
        mClientPid = callingpid;
    } else {
        mClientPid = pid;
    }
    mAuxEffectId = 0;
    mFlags = flags;
    mCbf = cbf;
    // if a callback that supplies audio data was provided, start an AudioTrackThread to drive it
    if (cbf != NULL) {
        /* AudioTrackThread implements two core functions:
         * 1. Data transfer between AudioTrack and AudioFlinger: AudioFlinger starts a
         *    thread dedicated to receiving the client's audio data, so the client also
         *    needs a thread to "continuously" push audio data to it.
         * 2. Reporting the transfer state: AudioTrack stores a callback of type
         *    callback_t (the member mCbf) that is invoked whenever an event occurs.
         */
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
        // run the thread
        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
    }

    // key point: create the IAudioTrack
    status_t status = createTrack_l();

    if (status != NO_ERROR) {
        if (mAudioTrackThread != 0) {
            mAudioTrackThread->requestExit();   // see comment in AudioTrack.h
            mAudioTrackThread->requestExitAndWait();
            mAudioTrackThread.clear();
        }
        return status;
    }
    //...
    return NO_ERROR;
}
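
Since the callback is how stream-mode clients supply data, here is a sketch of what such a client-side callback can look like (the event names and the Buffer type are from AudioTrack.h; the handler body is illustrative):

// callback_t is declared in AudioTrack.h as: void (*)(int event, void* user, void* info)
static void audioCallback(int event, void* user, void* info) {
    switch (event) {
    case AudioTrack::EVENT_MORE_DATA: {
        // AudioTrackThread asks for more data: fill buffer->raw with up to
        // buffer->size bytes of PCM, then set buffer->size to what was written
        AudioTrack::Buffer* buffer = static_cast<AudioTrack::Buffer*>(info);
        memset(buffer->raw, 0, buffer->size);  // e.g. write silence
        break;
    }
    case AudioTrack::EVENT_UNDERRUN:
        // the server ran out of data; the client is producing too slowly
        break;
    default:
        break;
    }
}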

Next we focus on the createTrack_l method; its implementation is as follows:

status_t AudioTrack::createTrack_l()
{
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    //...
    /* audio_io_handle_t is an int (typedef'd in audio.h). AudioFlinger uses this value
     * as the index of one of its internal worker threads; it creates several such
     * threads depending on the configuration. AudioSystem::getOutputForAttr selects a
     * suitable worker thread based on the stream type and other parameters and stores
     * its index in the output variable. An AudioTrack normally ends up on the mixer
     * thread (MixerThread).
     */
    audio_io_handle_t output;
    audio_stream_type_t streamType = mStreamType;
    audio_attributes_t *attr = (mStreamType == AUDIO_STREAM_DEFAULT) ? &mAttributes : NULL;
    // key point 1: obtain the output from the attributes
    status_t status = AudioSystem::getOutputForAttr(attr, &output,
                                                    (audio_session_t)mSessionId, &streamType,
                                                    mSampleRate, mFormat, mChannelMask,
                                                    mFlags, mOffloadInfo);
    //...
    uint32_t afLatency;
    status = AudioSystem::getLatency(output, &afLatency);
    if (status != NO_ERROR) {
        ALOGE("getLatency(%d) failed status %d", output, status);
        goto release;
    }

    size_t afFrameCount;
    status = AudioSystem::getFrameCount(output, &afFrameCount);
    if (status != NO_ERROR) {
        ALOGE("getFrameCount(output=%d) status %d", output, status);
        goto release;
    }

    uint32_t afSampleRate;
    status = AudioSystem::getSamplingRate(output, &afSampleRate);
    if (status != NO_ERROR) {
        ALOGE("getSamplingRate(output=%d) status %d", output, status);
        goto release;
    }
    if (mSampleRate == 0) {
        mSampleRate = afSampleRate;
    }
    //...
    size_t temp = frameCount;   // temp may be replaced by a revised value of frameCount,
                                // but we will still need the original value also
    /* key point 2: create the Track. This returns a binder proxy (IAudioTrack) for the
     * Track created inside AudioFlinger; it is the key link between AudioTrack and
     * AudioFlinger.
     */
    sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
                                                      mSampleRate,
                                                      //...
                                                      mClientUid,
                                                      &status);
    //...
    /* Obtain the shared-memory buffer of the track. When the PlaybackThread creates a
     * PlaybackThread::Track object, the required buffer space is allocated right then.
     * This memory can be shared across processes, so AudioTrack can map it via
     * track->getCblk().
     */
    sp<IMemory> iMem = track->getCblk(); // request the data buffer from AudioFlinger
    if (iMem == 0) {
        ALOGE("Could not get control block");
        return NO_INIT;
    }
    void *iMemPointer = iMem->pointer();
    if (iMemPointer == NULL) {
        ALOGE("Could not get control block pointer");
        return NO_INIT;
    }
    // invariant that mAudioTrack != 0 is true only after set() returns successfully
    if (mAudioTrack != 0) {
        mAudioTrack->asBinder()->unlinkToDeath(mDeathNotifier, this);
        mDeathNotifier.clear();
    }

    // the intermediary for communicating with AudioFlinger
    mAudioTrack = track;
    mCblkMemory = iMem;
    IPCThreadState::self()->flushCommands();

    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);
    mCblk = cblk;
    //...
    // update proxy: the app-side AudioTrack and the Track in the thread now share memory.
    // The Proxy classes wrap the control interface of the shared buffer and implement its use.
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
    // Use the Proxy setters to store parameters such as VolumeLR, SampleRate and
    // SendLevel; the AudioFlinger mixer thread reads them back when mixing.
    mProxy->setVolumeLR(gain_minifloat_pack(
            gain_from_float(mVolume[AUDIO_INTERLEAVE_LEFT]),
            gain_from_float(mVolume[AUDIO_INTERLEAVE_RIGHT])));

    mProxy->setSendLevel(mSendLevel);
    mProxy->setSampleRate(mSampleRate);
    mProxy->setMinimum(mNotificationFramesAct);

    mDeathNotifier = new DeathNotifier(this);
    mAudioTrack->asBinder()->linkToDeath(mDeathNotifier, this);

    return NO_ERROR;

release:
    AudioSystem::releaseOutput(output, streamType, (audio_session_t)mSessionId);
    if (status == NO_ERROR) {
        status = NO_INIT;
    }
    return status;
}
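
Before moving on, here is a sketch of the shared memory obtained via getCblk() in stream mode (no app-supplied sharedBuffer); the layout is simplified from the code above and its elided parts:

// Control block and data FIFO live in one cross-process mapping:
//
//   +---------------------+----------------------------------------+
//   | audio_track_cblk_t  | audio data buffer (frameCount frames)  |
//   +---------------------+----------------------------------------+
//   ^ iMemPointer           ^ buffers = (char*)cblk + sizeof(audio_track_cblk_t)
//
// The client-side AudioTrackClientProxy writes into the FIFO, while the
// server-side proxy in AudioFlinger's PlaybackThread reads from it.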

2.1 Selecting the output through the audio attributes

Here we focus on the implementation of AudioSystem::getOutputForAttr; the code is as follows:

status_t AudioSystem::getOutputForAttr(const audio_attributes_t *attr,
                                        audio_io_handle_t *output,
                                        audio_session_t session,
                                        audio_stream_type_t *stream,
                                        uint32_t samplingRate,
                                        audio_format_t format,
                                        audio_channel_mask_t channelMask,
                                        audio_output_flags_t flags,
                                        const audio_offload_info_t *offloadInfo)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return NO_INIT;
    return aps->getOutputForAttr(attr, output, session, stream,
                                 samplingRate, format, channelMask,
                                 flags, offloadInfo);
}

This mainly ends up in AudioPolicyManager::getOutputForAttr (via AudioPolicyService), implemented as follows:

status_t AudioPolicyManager::getOutputForAttr(const audio_attributes_t *attr,
                                              audio_io_handle_t *output,
                                              audio_session_t session,
                                              audio_stream_type_t *stream,
                                              uint32_t samplingRate,
                                              audio_format_t format,
                                              audio_channel_mask_t channelMask,
                                              audio_output_flags_t flags,
                                              const audio_offload_info_t *offloadInfo)
{
    //...
    // derive the routing strategy from the audio attributes
    routing_strategy strategy = (routing_strategy) getStrategyForAttr(&attributes);
    // use the strategy to determine the playback device (headset / Bluetooth / speaker)
    audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);

    if ((attributes.flags & AUDIO_FLAG_HW_AV_SYNC) != 0) {
        flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_HW_AV_SYNC);
    }
    // derive the stream type from the attributes
    *stream = streamTypefromAttributesInt(&attributes);
    // find which outputs carry this device, then pick the most suitable one
    *output = getOutputForDevice(device, session, *stream,
                                 samplingRate, format, channelMask,
                                 flags, offloadInfo);
    if (*output == AUDIO_IO_HANDLE_NONE) {
        return INVALID_OPERATION;
    }
    return NO_ERROR;
}
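
To make the two lookups concrete, here is an illustrative chain for a music track (simplified; the real tables live in AudioPolicyManager and depend on which devices are connected):

// attributes { usage = AUDIO_USAGE_MEDIA }   -> STRATEGY_MEDIA
// STRATEGY_MEDIA + wired headset connected   -> AUDIO_DEVICE_OUT_WIRED_HEADSET
// STRATEGY_MEDIA + no accessory connected    -> AUDIO_DEVICE_OUT_SPEAKER
// streamTypefromAttributesInt(&attributes)   -> AUDIO_STREAM_MUSIC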

To summarize the path from the app constructing an AudioTrack to selecting an output:

  1. The stream type is specified when the app constructs the AudioTrack.
  2. AudioTrack::setAttributesFromStreamType
  3. AudioPolicyManager::getStrategyForAttr
  4. AudioPolicyManager::getDeviceForStrategy
  5. AudioPolicyManager::getOutputForDevice -> AudioPolicyManager::getOutputsForDevice -> output = selectOutput(outputs, flags, format)

A brief overview of the selection logic in getOutputForDevice: several outputs may support a given device, so AudioPolicyManager::selectOutput has to pick the most suitable one (a simplified sketch follows the list):

  1. First, the app passes flags when creating the AudioTrack, and each output also declares flags (configured in audio_policy.conf); the two sets are compared and the output sharing the most flags wins.
  2. Second, if several outputs tie on the flag comparison, the primary output is chosen among them if it supports the device.
  3. Finally, if no better candidate emerges, the first available output is chosen.
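
A simplified sketch of that selection loop, condensed from AudioPolicyManager::selectOutput (not the verbatim AOSP code):

audio_io_handle_t AudioPolicyManager::selectOutput(   // simplified sketch
        const SortedVector<audio_io_handle_t>& outputs,
        audio_output_flags_t flags,
        audio_format_t format /* format-based matching elided in this sketch */)
{
    int maxCommonFlags = 0;
    audio_io_handle_t outputForFlags = AUDIO_IO_HANDLE_NONE;
    audio_io_handle_t outputForPrimary = AUDIO_IO_HANDLE_NONE;

    for (size_t i = 0; i < outputs.size(); i++) {
        sp<AudioOutputDescriptor> desc = mOutputs.valueFor(outputs[i]);
        // 1. count how many of the requested flags this output also declares
        int commonFlags = popcount(desc->mProfile->mFlags & flags);
        if (commonFlags > maxCommonFlags) {
            outputForFlags = outputs[i];          // best flag match so far
            maxCommonFlags = commonFlags;
        }
        // 2. remember the primary output as a fallback
        if (desc->mProfile->mFlags & AUDIO_OUTPUT_FLAG_PRIMARY) {
            outputForPrimary = outputs[i];
        }
    }
    if (outputForFlags != AUDIO_IO_HANDLE_NONE) return outputForFlags;
    if (outputForPrimary != AUDIO_IO_HANDLE_NONE) return outputForPrimary;
    // 3. otherwise just take the first candidate
    return (outputs.size() != 0) ? outputs[0] : AUDIO_IO_HANDLE_NONE;
}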

2.2 Finding the corresponding PlaybackThread from the output

@1 createTrack analysis

AudioFlinger::createTrack looks up the PlaybackThread for the given output and creates the corresponding track on it. The code is as follows:

sp<IAudioTrack> AudioFlinger::createTrack(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        //...
        int clientUid,
        status_t *status)
{
    sp<PlaybackThread::Track> track;
    sp<TrackHandle> trackHandle;
    sp<Client> client;
    status_t lStatus;
    int lSessionId;
    //...
    {
        Mutex::Autolock _l(mLock);
        // key point 1: determine the PlaybackThread from the output
        PlaybackThread *thread = checkPlaybackThread_l(output);
        if (thread == NULL) {
            ALOGE("no playback thread found for output handle %d", output);
            lStatus = BAD_VALUE;
            goto Exit;
        }

        pid_t pid = IPCThreadState::self()->getCallingPid();
        client = registerPid(pid);
        //...
        // key point 2: have that thread create a track via createTrack_l
        track = thread->createTrack_l(client, streamType, sampleRate, format,
                channelMask, frameCount, sharedBuffer, lSessionId, flags, tid, clientUid, &lStatus);
        //...
    }
    //...
    // return handle to client
    trackHandle = new TrackHandle(track);
Exit:
    *status = lStatus;
    return trackHandle;
}

Let's focus on checkPlaybackThread_l; its implementation is as follows:

AudioFlinger::PlaybackThread *AudioFlinger::checkPlaybackThread_l(audio_io_handle_t output) const
{
    return mPlaybackThreads.valueFor(output).get();
}

This simply looks up the PlaybackThread associated with the given output handle, as sketched below.
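
For reference, mPlaybackThreads is declared in AudioFlinger.h (approximate shape shown below) as a keyed vector from output handles to threads, so the lookup is a plain map query:

// From AudioFlinger.h (approximate shape):
//   DefaultKeyedVector<audio_io_handle_t, sp<PlaybackThread>> mPlaybackThreads;
//
// Each successful openOutput() adds an entry, so the handle returned by
// getOutputForAttr() directly indexes the worker thread that drives that output:
sp<PlaybackThread> thread = mPlaybackThreads.valueFor(output);  // 0 if absent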

@2 createTrack_l analysis

Next, we focus on PlaybackThread::createTrack_l; its implementation is as follows:

sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
        const sp<AudioFlinger::Client>& client,
        //...
        int uid,
        status_t *status)
{
    size_t frameCount = *pFrameCount;
    sp<Track> track;
    status_t lStatus;

    bool isTimed = (*flags & IAudioFlinger::TRACK_TIMED) != 0;

    //...
        if (!isTimed) {
            // create a new Track
            track = new Track(this, client, streamType, sampleRate, format,
                              channelMask, frameCount, NULL, sharedBuffer,
                              sessionId, uid, *flags, TrackBase::TYPE_DEFAULT);
        } else {
            track = TimedTrack::create(this, client, streamType, sampleRate, format,
                    channelMask, frameCount, sharedBuffer, sessionId, uid);
        }

        // new Track always returns non-NULL,
        // but TimedTrack::create() is a factory that could fail by returning NULL
        lStatus = track != 0 ? track->initCheck() : (status_t) NO_MEMORY;
        if (lStatus != NO_ERROR) {
            ALOGE("createTrack_l() initCheck failed %d; no control block?", lStatus);
            // track must be cleared from the caller as the caller has the AF lock
            goto Exit;
        }
        // add the track to the PlaybackThread's mTracks table
        mTracks.add(track);
    //...
    lStatus = NO_ERROR;

Exit:
    *status = lStatus;
    return track;
}

In short, the PlaybackThread creates a new Track and adds it to its mTracks table; the sketch below shows the surrounding bookkeeping.
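
The relevant bookkeeping, simplified from the PlaybackThread declarations (a sketch, not the exact AOSP code):

// Simplified view of the PlaybackThread track tables:
//   SortedVector< sp<Track> > mTracks;        // all tracks created on this thread
//   SortedVector< wp<Track> > mActiveTracks;  // tracks that have been start()ed
//
// createTrack_l() only inserts the Track into mTracks; it is Track::start() that
// later promotes it into mActiveTracks, at which point the thread's mixer loop
// begins pulling PCM from the track's shared buffer.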
