Android Framework Audio Subsystem (06): AudioTrack Creation

Link to the series index: Android Framework Audio Subsystem topic outline


Key points & notes for this chapter:

This chapter focuses on the constructor-analysis sub-branch of the AudioTrack flow (upper-left of the mind map above), giving a detailed analysis of the native-layer AudioTrack constructor. As the previous chapter showed, the Java-layer AudioTrack ultimately calls into the native-layer AudioTrack, so the native class is the core of our analysis. The main question here is how an AudioTrack becomes associated with an output and a playback thread.


1 AudioTrack Constructor Analysis

The constructors of the C++ AudioTrack class are shown below:

AudioTrack::AudioTrack()//no-arg constructor; set() must be called afterwards
    : mStatus(NO_INIT),
      mIsTimed(false),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0)
{
    mAttributes.content_type = AUDIO_CONTENT_TYPE_UNKNOWN;
    mAttributes.usage = AUDIO_USAGE_UNKNOWN;
    mAttributes.flags = 0x0;
    strcpy(mAttributes.tags, "");
}

AudioTrack::AudioTrack(//parameterized constructor; no separate set() call needed
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        int sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes)
    : mStatus(NO_INIT),
      mIsTimed(false),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0)
{
    mStatus = set(streamType, sampleRate, format, channelMask,
            frameCount, flags, cbf, user, notificationFrames,
            0 /*sharedBuffer*/, false /*threadCanCallJava*/, sessionId, transferType,
            offloadInfo, uid, pid, pAttributes);
}

As the code shows, AudioTrack can be used in two ways (see the sketch after this list):

  • No-arg constructor: set() must be called later to configure the parameters.
  • Parameterized constructor: set() is invoked directly from within the constructor.
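
For orientation, here is a minimal usage sketch contrasting the two paths. It is not verbatim framework code: the parameter values are hypothetical, and it relies on the default arguments declared for set() and the constructor in AudioTrack.h.

sp<AudioTrack> t1 = new AudioTrack();                  // path 1: no-arg constructor
status_t st = t1->set(AUDIO_STREAM_MUSIC, 44100,       // set() must follow to
                      AUDIO_FORMAT_PCM_16_BIT,         // configure the track
                      AUDIO_CHANNEL_OUT_STEREO);

sp<AudioTrack> t2 = new AudioTrack(AUDIO_STREAM_MUSIC, // path 2: parameterized
                                   44100,              // constructor; set() runs
                                   AUDIO_FORMAT_PCM_16_BIT, // inside the constructor
                                   AUDIO_CHANNEL_OUT_STEREO);
st = t2->initCheck();  // returns mStatus: NO_ERROR only if the internal set() succeeded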

2 Detailed Analysis of the set Function

Next we analyze AudioTrack's set() method, the core method; its code is as follows:

status_t AudioTrack::set(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        //...
        const audio_attributes_t* pAttributes)
{
    //set the audio data transfer type...
    mSharedBuffer = sharedBuffer;
    mTransfer = transferType;

    AutoMutex lock(mLock);

    // invariant that mAudioTrack != 0 is true only after set() returns successfully
    if (mAudioTrack != 0) {
        return INVALID_OPERATION;
    }

    //stream type: defaults to AUDIO_STREAM_MUSIC if unspecified
    if (streamType == AUDIO_STREAM_DEFAULT) {
        streamType = AUDIO_STREAM_MUSIC;
    }
    if (pAttributes == NULL) {
        if (uint32_t(streamType) >= AUDIO_STREAM_PUBLIC_CNT) {
            ALOGE("Invalid stream type %d", streamType);
            return BAD_VALUE;
        }
        mStreamType = streamType;
    } else {
        // stream type shouldn't be looked at, this track has audio attributes
        memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
        mStreamType = AUDIO_STREAM_DEFAULT;
    }

    //audio format: sample depth defaults to 16-bit PCM
    if (format == AUDIO_FORMAT_DEFAULT) {
        format = AUDIO_FORMAT_PCM_16_BIT;
    }

    // validate parameters
    if (!audio_is_valid_format(format)) {
        ALOGE("Invalid format %#x", format);
        return BAD_VALUE;
    }
    mFormat = format;

    //validate the output channel mask
    if (!audio_is_output_channel(channelMask)) {
        ALOGE("Invalid channel mask %#x", channelMask);
        return BAD_VALUE;
    }
    mChannelMask = channelMask;
    uint32_t channelCount = audio_channel_count_from_out_mask(channelMask);
    mChannelCount = channelCount;
    //...

    //a sample rate of 0 means "use the output's rate" (resolved later in createTrack_l),
    //which is not allowed for direct outputs
    if (sampleRate == 0 && (flags & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {
        return BAD_VALUE;
    }
    mSampleRate = sampleRate;

    if (offloadInfo != NULL) {
        mOffloadInfoCopy = *offloadInfo;
        mOffloadInfo = &mOffloadInfoCopy;
    } else {
        mOffloadInfo = NULL;
    }
    
    //initialize both left and right channel volume to maximum
    mVolume[AUDIO_INTERLEAVE_LEFT] = 1.0f;
    mVolume[AUDIO_INTERLEAVE_RIGHT] = 1.0f;
    mSendLevel = 0.0f;
    // mFrameCount is initialized in createTrack_l
    mReqFrameCount = frameCount;
    mNotificationFramesReq = notificationFrames;
    mNotificationFramesAct = 0;
    if (sessionId == AUDIO_SESSION_ALLOCATE) {
        mSessionId = AudioSystem::newAudioUniqueId();
    } else {
        mSessionId = sessionId;
    }
    int callingpid = IPCThreadState::self()->getCallingPid();
    int mypid = getpid();
    if (uid == -1 || (callingpid != mypid)) {
        mClientUid = IPCThreadState::self()->getCallingUid();
    } else {
        mClientUid = uid;
    }
    if (pid == -1 || (callingpid != mypid)) {
        mClientPid = callingpid;
    } else {
        mClientPid = pid;
    }
    mAuxEffectId = 0;
    mFlags = flags;
    mCbf = cbf;
    //if a callback that supplies audio data was provided, start an AudioTrackThread to feed it
    if (cbf != NULL) {
        /*AudioTrackThread serves two core purposes:
         *1 Data transfer between AudioTrack and AudioFlinger: AudioFlinger runs a thread
         *  dedicated to receiving the client's audio data, so the client needs its own
         *  thread to keep pushing audio data across.
         *2 Reporting transfer status: AudioTrack stores a callback_t function (the member
         *  mCbf) that is invoked to deliver events back when they occur.
         */
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
        //start the thread
        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
    }

    //key step: create the IAudioTrack
    status_t status = createTrack_l();

    if (status != NO_ERROR) {
        if (mAudioTrackThread != 0) {
            mAudioTrackThread->requestExit();   // see comment in AudioTrack.h
            mAudioTrackThread->requestExitAndWait();
            mAudioTrackThread.clear();
        }
        return status;
    }
    //...
    return NO_ERROR;
}
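
For reference, the callback type stored in mCbf is a plain function pointer; in the AudioTrack.h of this era it is declared roughly as follows:

// invoked from AudioTrackThread, e.g. with EVENT_MORE_DATA to request more audio
// data, or with marker/underrun events to report transfer status
typedef void (*callback_t)(int event, void* user, void *info);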

We now focus on the createTrack_l method; its implementation is as follows:

status_t AudioTrack::createTrack_l()
{
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    //...
    /*audio_io_handle_t is an int typedef'd in audio.h; AudioFlinger mainly uses this
     *value as the index of one of its internal worker threads (AudioFlinger creates
     *several worker threads as needed). AudioSystem::getOutputForAttr selects a
     *suitable worker thread based on the stream type and other parameters and stores
     *its AudioFlinger index in the output variable; an AudioTrack normally lands on
     *the mixer thread (MixerThread).
     */
    audio_io_handle_t output;
    audio_stream_type_t streamType = mStreamType;
    audio_attributes_t *attr = (mStreamType == AUDIO_STREAM_DEFAULT) ? &mAttributes : NULL;
    //key step 1: obtain the output from the attributes
    status_t status = AudioSystem::getOutputForAttr(attr, &output,
                                                    (audio_session_t)mSessionId, &streamType,
                                                    mSampleRate, mFormat, mChannelMask,
                                                    mFlags, mOffloadInfo);
    //...
    uint32_t afLatency;
    status = AudioSystem::getLatency(output, &afLatency);
    if (status != NO_ERROR) {
        ALOGE("getLatency(%d) failed status %d", output, status);
        goto release;
    }

    size_t afFrameCount;
    status = AudioSystem::getFrameCount(output, &afFrameCount);
    if (status != NO_ERROR) {
        ALOGE("getFrameCount(output=%d) status %d", output, status);
        goto release;
    }

    uint32_t afSampleRate;
    status = AudioSystem::getSamplingRate(output, &afSampleRate);
    if (status != NO_ERROR) {
        ALOGE("getSamplingRate(output=%d) status %d", output, status);
        goto release;
    }
    if (mSampleRate == 0) {
        mSampleRate = afSampleRate;
    }
    //...
    size_t temp = frameCount;   // temp may be replaced by a revised value of frameCount,
                                // but we will still need the original value also
    /*key step 2: create the Track. This returns a binder proxy (IAudioTrack) for the
     *Track object inside AudioFlinger; it is the crucial link between AudioTrack
     *and AudioFlinger.
     */
    sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
                                                      mSampleRate,
                                                      //...
                                                      mClientUid,
                                                      &status);
    //...
    /*Obtain the shared-memory buffer behind the track variable. When the
     *PlaybackThread created its PlaybackThread::Track object, the required buffer
     *space was already allocated; that space can be shared across processes, so
     *AudioTrack can map it via track->getCblk().
     */
    sp<IMemory> iMem = track->getCblk();//request the data buffer from AudioFlinger
    if (iMem == 0) {
        ALOGE("Could not get control block");
        return NO_INIT;
    }
    void *iMemPointer = iMem->pointer();
    if (iMemPointer == NULL) {
        ALOGE("Could not get control block pointer");
        return NO_INIT;
    }
    // invariant that mAudioTrack != 0 is true only after set() returns successfully
    if (mAudioTrack != 0) {
        mAudioTrack->asBinder()->unlinkToDeath(mDeathNotifier, this);
        mDeathNotifier.clear();
    }

    //the intermediary for communicating with AudioFlinger
    mAudioTrack = track;
    mCblkMemory = iMem;
    IPCThreadState::self()->flushCommands();

    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);
    mCblk = cblk;
    //...
    // update proxy: the app-side AudioTrack and the Track in the thread now share this memory;
    // the proxy classes wrap the control interface for using the shared track buffer
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
    //use the proxy's setters to store VolumeLR, SampleRate, SendLevel, etc.;
    //the AudioFlinger mixer thread reads these parameters back when mixing
    mProxy->setVolumeLR(gain_minifloat_pack(
            gain_from_float(mVolume[AUDIO_INTERLEAVE_LEFT]),
            gain_from_float(mVolume[AUDIO_INTERLEAVE_RIGHT])));

    mProxy->setSendLevel(mSendLevel);
    mProxy->setSampleRate(mSampleRate);
    mProxy->setMinimum(mNotificationFramesAct);

    mDeathNotifier = new DeathNotifier(this);
    mAudioTrack->asBinder()->linkToDeath(mDeathNotifier, this);

    return NO_ERROR;
    }   // closes a scope opened in code elided above

release:
    AudioSystem::releaseOutput(output, streamType, (audio_session_t)mSessionId);
    if (status == NO_ERROR) {
        status = NO_INIT;
    }
    return status;
}
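
To make the proxy's role concrete, here is a simplified sketch (not the verbatim framework code; framesToWrite and pcmData are hypothetical) of how the client side later moves PCM data through the shared buffer via AudioTrackClientProxy:

Proxy::Buffer audioBuffer;
audioBuffer.mFrameCount = framesToWrite;            // frames we want to fill
status_t err = mProxy->obtainBuffer(&audioBuffer);  // claim writable space in the
                                                    // buffer managed by the cblk
if (err == NO_ERROR) {
    memcpy(audioBuffer.mRaw, pcmData,               // copy PCM into shared memory
           audioBuffer.mFrameCount * mFrameSizeAF);
    mProxy->releaseBuffer(&audioBuffer);            // publish the frames so the
}                                                   // mixer thread can consume them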

2.1 Selecting the output via Attr

Here we focus on the implementation of AudioSystem::getOutputForAttr; the code is as follows:

status_t AudioSystem::getOutputForAttr(const audio_attributes_t *attr,
                                        audio_io_handle_t *output,
                                        audio_session_t session,
                                        audio_stream_type_t *stream,
                                        uint32_t samplingRate,
                                        audio_format_t format,
                                        audio_channel_mask_t channelMask,
                                        audio_output_flags_t flags,
                                        const audio_offload_info_t *offloadInfo)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return NO_INIT;
    return aps->getOutputForAttr(attr, output, session, stream,
                                 samplingRate, format, channelMask,
                                 flags, offloadInfo);
}

This mainly ends up in AudioPolicyManager::getOutputForAttr, implemented as follows:

status_t AudioPolicyManager::getOutputForAttr(const audio_attributes_t *attr,
                                              audio_io_handle_t *output,
                                              audio_session_t session,
                                              audio_stream_type_t *stream,
                                              uint32_t samplingRate,
                                              audio_format_t format,
                                              audio_channel_mask_t channelMask,
                                              audio_output_flags_t flags,
                                              const audio_offload_info_t *offloadInfo)
{
    //...
    //derive the strategy category from the audio attributes (Attr)
    routing_strategy strategy = (routing_strategy) getStrategyForAttr(&attributes);
    //use the strategy category to determine the playback device (headset / Bluetooth / speaker)
    audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);

    if ((attributes.flags & AUDIO_FLAG_HW_AV_SYNC) != 0) {
        flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_HW_AV_SYNC);
    }
    //derive the stream type from the attributes
    *stream = streamTypefromAttributesInt(&attributes);
    //find which outputs support this device, then pick the most suitable one among them
    *output = getOutputForDevice(device, session, *stream,
                                 samplingRate, format, channelMask,
                                 flags, offloadInfo);
    if (*output == AUDIO_IO_HANDLE_NONE) {
        return INVALID_OPERATION;
    }
    return NO_ERROR;
}

To summarize the flow from the app constructing an AudioTrack to the output being selected:

  1. The app specifies a stream type when constructing the AudioTrack.
  2. AudioTrack::setAttributesFromStreamType
  3. AudioPolicyManager::getStrategyForAttr
  4. AudioPolicyManager::getDeviceForStrategy
  5. AudioPolicyManager::getOutputForDevice -> AudioPolicyManager::getOutputsForDevice -> output = selectOutput(outputs, flags, format);

A brief overview of the logic behind getOutputForDevice: several outputs may all support a given device, so how does AudioPolicyManager::selectOutput pick the most suitable one? (A condensed sketch follows this list.)

  1. First, the flags the app passed when creating the AudioTrack are compared against the flags in each output's profile (audio_policy.conf), and the outputs with the highest number of matching flags are kept.
  2. Next, if several outputs tie for the best flag match, the primary output is chosen if it supports the device.
  3. Finally, if no better choice emerges, the first candidate in the list is used.
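
The following condensed sketch paraphrases that selection order. It is not the verbatim AOSP code: it assumes the surrounding AudioPolicyManager context (mOutputs), and the descriptor member names are as recalled from this era.

audio_io_handle_t selectOutputSketch(const SortedVector<audio_io_handle_t>& outputs,
                                     audio_output_flags_t flags)
{
    audio_io_handle_t bestByFlags = AUDIO_IO_HANDLE_NONE;
    audio_io_handle_t primary     = AUDIO_IO_HANDLE_NONE;
    int maxCommonFlags = 0;

    for (size_t i = 0; i < outputs.size(); i++) {
        sp<AudioOutputDescriptor> desc = mOutputs.valueFor(outputs[i]);
        // 1. score each output by how many requested flags its profile shares
        int commonFlags = popcount(desc->mProfile->mFlags & flags);
        if (commonFlags > maxCommonFlags) {
            bestByFlags = outputs[i];
            maxCommonFlags = commonFlags;
        }
        // 2. remember the primary output as the tie-break fallback
        if (desc->mFlags & AUDIO_OUTPUT_FLAG_PRIMARY) {
            primary = outputs[i];
        }
    }
    if (bestByFlags != AUDIO_IO_HANDLE_NONE) return bestByFlags; // best flag match
    if (primary != AUDIO_IO_HANDLE_NONE)     return primary;     // else primary
    return outputs[0];                                           // else first one
}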

2.2 Finding the Corresponding PlaybackThread from the output

@1 createTrack Analysis

The output is used to find the corresponding PlaybackThread, and the corresponding Track is then created inside that thread. The createTrack implementation is as follows:

sp<IAudioTrack> AudioFlinger::createTrack(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        //...
        int clientUid,
        status_t *status)
{
    sp<PlaybackThread::Track> track;
    sp<TrackHandle> trackHandle;
    sp<Client> client;
    status_t lStatus;
    int lSessionId;
    //...
    {
        Mutex::Autolock _l(mLock);
        //key step 1: determine the PlaybackThread from the output
        PlaybackThread *thread = checkPlaybackThread_l(output);
        if (thread == NULL) {
            ALOGE("no playback thread found for output handle %d", output);
            lStatus = BAD_VALUE;
            goto Exit;
        }

        pid_t pid = IPCThreadState::self()->getCallingPid();
        client = registerPid(pid);
        //...
        //key step 2: have that thread create a track via createTrack_l
        track = thread->createTrack_l(client, streamType, sampleRate, format,
                channelMask, frameCount, sharedBuffer, lSessionId, flags, tid, clientUid, &lStatus);
        //...
    }
    //...
    // return handle to client
    trackHandle = new TrackHandle(track);
Exit:
    *status = lStatus;
    return trackHandle;
}

Focusing on checkPlaybackThread_l, the implementation is:

AudioFlinger::PlaybackThread *AudioFlinger::checkPlaybackThread_l(audio_io_handle_t output) const
{
    return mPlaybackThreads.valueFor(output).get();
}

This simply uses the output handle to look up the corresponding PlaybackThread, as shown in the declaration below.
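
The lookup works because AudioFlinger keeps every playback thread in a KeyedVector indexed by its output handle; the member is declared in AudioFlinger.h along these lines:

// each output handle maps to the PlaybackThread that drives that output
DefaultKeyedVector<audio_io_handle_t, sp<PlaybackThread> >  mPlaybackThreads;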

@2 createTrack_l Analysis

Next we focus on createTrack_l; its implementation is as follows:

sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
        const sp<AudioFlinger::Client>& client,
        //...
        int uid,
        status_t *status)
{
    size_t frameCount = *pFrameCount;
    sp<Track> track;
    status_t lStatus;

    bool isTimed = (*flags & IAudioFlinger::TRACK_TIMED) != 0;

    //...
        if (!isTimed) {
            //create a new Track
            track = new Track(this, client, streamType, sampleRate, format,
                              channelMask, frameCount, NULL, sharedBuffer,
                              sessionId, uid, *flags, TrackBase::TYPE_DEFAULT);
        } else {
            track = TimedTrack::create(this, client, streamType, sampleRate, format,
                    channelMask, frameCount, sharedBuffer, sessionId, uid);
        }

        // new Track always returns non-NULL,
        // but TimedTrack::create() is a factory that could fail by returning NULL
        lStatus = track != 0 ? track->initCheck() : (status_t) NO_MEMORY;
        if (lStatus != NO_ERROR) {
            ALOGE("createTrack_l() initCheck failed %d; no control block?", lStatus);
            // track must be cleared from the caller as the caller has the AF lock
            goto Exit;
        }
        //add the track to the PlaybackThread's mTracks list
        mTracks.add(track);
    //...
    lStatus = NO_ERROR;

Exit:
    *status = lStatus;
    return track;
}

In short, the PlaybackThread creates a new Track and adds it to its own mTracks list. The sketch below summarizes the object relationships that result from the whole createTrack flow.
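
Putting the pieces together, the object relationships after a successful createTrack() call look roughly like this (conceptual sketch, not code):

// app process                        mediaserver (AudioFlinger) process
// -----------                        ----------------------------------
// AudioTrack
//   mAudioTrack : sp<IAudioTrack> --binder--> TrackHandle
//   mCblkMemory / mProxy <--shared memory-->    wraps a PlaybackThread::Track,
//                                               which sits in the mTracks list of
//                                               the PlaybackThread selected via
//                                               the output handle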

Reprinted from blog.csdn.net/vviccc/article/details/105310881