Android Framework Audio Subsystem (03): AudioPolicyService Startup Analysis

Series index link: Topic Outline — Android Framework Audio Subsystem


Key points & notes for this chapter:

This chapter focuses on the AudioPolicyService portion of the overview mind map (not reproduced here). It analyzes the startup of AudioPolicyService, from the AudioPolicyService startup flow into AudioPolicyManager, and then a detailed look at the AudioPolicyManager flow: loading the configuration file, loadHwModule, and openOutput (the AudioFlinger-side operations involved here are analyzed in detail in the next chapter).


1 From the AudioPolicyService startup flow to AudioPolicyManager

AudioPolicyService is started from main_mediaserver.cpp, shown below:

int main(int argc __unused, char** argv)
{
    signal(SIGPIPE, SIG_IGN);
    char value[PROPERTY_VALUE_MAX];
    //...
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    //...
    AudioFlinger::instantiate();//AudioFlinger
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();//AudioPolicyService
    SoundTriggerHwService::instantiate();
    registerExtensions();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
    //...
}

Since AudioFlinger is started first, AudioPolicyService can use the AudioFlinger service directly once it starts. This chapter focuses on the implementation of AudioPolicyService::instantiate():

static void instantiate() { publish(); }

Continuing into publish(), the code is:

    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(// register the service with servicemanager
                String16(SERVICE::getServiceName()),
                new SERVICE(), allowIsolated);
    }
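
As an illustrative aside (a minimal sketch of mine, not code from the original article): once addService() succeeds, any other process can retrieve the service through ServiceManager. For AudioPolicyService, SERVICE::getServiceName() returns "media.audio_policy", so a client-side lookup would look roughly like this:

// Illustrative client-side lookup (sketch only; assumes the usual
// libbinder/libmedia headers available in an AOSP build)
#include <binder/IServiceManager.h>
#include <media/IAudioPolicyService.h>

using namespace android;

sp<IAudioPolicyService> getAudioPolicyService() {
    sp<IServiceManager> sm = defaultServiceManager();
    // "media.audio_policy" is the name returned by AudioPolicyService::getServiceName()
    sp<IBinder> binder = sm->getService(String16("media.audio_policy"));
    if (binder == 0) {
        return NULL;
    }
    // interface_cast<> wraps the raw binder in a BpAudioPolicyService proxy
    return interface_cast<IAudioPolicyService>(binder);
}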

Besides registering the service with service_manager, publish() also creates an AudioPolicyService instance, which runs the AudioPolicyService constructor:

AudioPolicyService::AudioPolicyService()
    : BnAudioPolicyService(), mpAudioPolicyDev(NULL), mpAudioPolicy(NULL),
      mAudioPolicyManager(NULL), mAudioPolicyClient(NULL), mPhoneState(AUDIO_MODE_INVALID)
{
}

The constructor only initializes a few member variables; the real work happens in onFirstRef, implemented as follows:

void AudioPolicyService::onFirstRef()
{
    char value[PROPERTY_VALUE_MAX];
    const struct hw_module_t *module;
    int forced_val;
    int rc;
    {
        Mutex::Autolock _l(mLock);
        //create 3 threads:
        //for playing tone sounds
        mTonePlaybackThread = new AudioCommandThread(String8("ApmTone"), this);
        //for executing audio commands
        mAudioCommandThread = new AudioCommandThread(String8("ApmAudio"), this);
        //for executing output commands
        mOutputCommandThread = new AudioCommandThread(String8("ApmOutput"), this);
#ifdef USE_LEGACY_AUDIO_POLICY
        //legacy implementation, ignored here
#else
        //client-side interface to AudioFlinger, used to call AudioFlinger services
        mAudioPolicyClient = new AudioPolicyClient(this);
        mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);
#endif
    }
    // load audio processing modules
    sp<AudioPolicyEffects>audioPolicyEffects = new AudioPolicyEffects();
    {
        Mutex::Autolock _l(mLock);
        mAudioPolicyEffects = audioPolicyEffects;
    }
}

Here the key call is createAudioPolicyManager, implemented as follows:

extern "C" AudioPolicyInterface* createAudioPolicyManager(
        AudioPolicyClientInterface *clientInterface)
{
    return new AudioPolicyManager(clientInterface);
}

Next, we continue with the implementation of AudioPolicyManager.


2 Detailed analysis of AudioPolicyManager

The AudioPolicyManager constructor is implemented as follows:

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
    : //...
    mPrimaryOutput((audio_io_handle_t)0),
    mPhoneState(AUDIO_MODE_NORMAL),
    mLimitRingtoneVolume(false), mLastVoiceVolume(-1.0f),
    //...
{
    //...
    mDefaultOutputDevice = new DeviceDescriptor(String8(""), AUDIO_DEVICE_OUT_SPEAKER);
    //Key point 1: load the configuration file audio_policy.conf
    /*
     The system first tries to load the configuration file under vendor/etc, and then the
     one under system/etc. If both fail to load, it falls back to a default configuration
     whose module is named "primary"; this shows that the one module the audio system must
     always have is "primary".
     */
    if (loadAudioPolicyConfig(AUDIO_POLICY_VENDOR_CONFIG_FILE) != NO_ERROR) {
        if (loadAudioPolicyConfig(AUDIO_POLICY_CONFIG_FILE) != NO_ERROR) {
            ALOGE("could not load audio policy configuration file, setting defaults");
            defaultAudioPolicyConfig();
        }
    }
    // mAvailableOutputDevices and mAvailableInputDevices now contain all attached devices

    //initialize the volume curves (adjustment points) for each audio stream type
    initializeVolumeCurves();

    // open all output streams needed to access attached devices
    audio_devices_t outputDeviceTypes = mAvailableOutputDevices.types();
    //(the equivalent is done for inputs)
    for (size_t i = 0; i < mHwModules.size(); i++) {
        //Key point 2: use AudioFlinger to load the audio policy HAL module
        mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->mName);
        if (mHwModules[i]->mHandle == 0) {
            ALOGW("could not open HW module %s", mHwModules[i]->mName);
            continue;
        }
        
        for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)
        {
            const sp<IOProfile> outProfile = mHwModules[i]->mOutputProfiles[j];

            if (outProfile->mSupportedDevices.isEmpty()) {
                ALOGW("Output profile contains no device on module %s", mHwModules[i]->mName);
                continue;
            }

            if ((outProfile->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {
                continue;
            }
            audio_devices_t profileType = outProfile->mSupportedDevices.types();
            if ((profileType & mDefaultOutputDevice->mDeviceType) != AUDIO_DEVICE_NONE) {
                profileType = mDefaultOutputDevice->mDeviceType;
            } else {
                for (size_t k = 0; k  < outProfile->mSupportedDevices.size(); k++) {
                    profileType = outProfile->mSupportedDevices[k]->mDeviceType;
                    if ((profileType & outputDeviceTypes) != 0) {
                        break;
                    }
                }
            }
            if ((profileType & outputDeviceTypes) == 0) {
                continue;
            }
            //build an AudioOutputDescriptor (outputDesc) from outProfile
            sp<AudioOutputDescriptor> outputDesc = new AudioOutputDescriptor(outProfile);

            outputDesc->mDevice = profileType;
            audio_config_t config = AUDIO_CONFIG_INITIALIZER;
            config.sample_rate = outputDesc->mSamplingRate;
            config.channel_mask = outputDesc->mChannelMask;
            config.format = outputDesc->mFormat;
            audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
            //Key point 3: openOutput
            status_t status = mpClientInterface->openOutput(outProfile->mModule->mHandle,
                                                            &output,
                                                            &config,
                                                            &outputDesc->mDevice,
                                                            String8(""),
                                                            &outputDesc->mLatency,
                                                            outputDesc->mFlags);
            if (status != NO_ERROR) {
                //...
            } else {
                outputDesc->mSamplingRate = config.sample_rate;
                outputDesc->mChannelMask = config.channel_mask;
                outputDesc->mFormat = config.format;
                //...
                /* Save the output descriptor outputDesc into mOutputs, marking this output
                 * as opened; later, the integer handle "output" alone is enough to find the
                 * corresponding thread and outputDesc.
                 */
                addOutput(output, outputDesc);
                //set the output device
                setOutputDevice(output,
                                outputDesc->mDevice,
                                true);
            }
        }
        // open input streams needed to access attached devices to validate
        // mAvailableInputDevices list
        //(equivalent handling for inputs)
    }
    // make sure all attached devices have been allocated a unique ID
    for (size_t i = 0; i  < mAvailableOutputDevices.size();) {
        if (mAvailableOutputDevices[i]->mId == 0) {
            ALOGW("Input device %08x unreachable", mAvailableOutputDevices[i]->mDeviceType);
            mAvailableOutputDevices.remove(mAvailableOutputDevices[i]);
            continue;
        }
        i++;
    }
    //update the device/output bookkeeping
    updateDevicesAndOutputs();
    //...
}
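
One detail in the code above worth making concrete: addOutput() stores each output descriptor keyed by the integer handle returned from openOutput(), so later policy decisions only need that handle to recover the descriptor. Below is a minimal, self-contained sketch of this bookkeeping pattern, using std::map and std::shared_ptr as stand-ins for the DefaultKeyedVector and sp<> used by the real AudioPolicyManager (the names are illustrative only):

#include <map>
#include <memory>

// Stand-ins for the AOSP types, just to show the bookkeeping pattern.
using audio_io_handle_t = int;
struct AudioOutputDescriptor { /* device, format, latency, ... */ };

std::map<audio_io_handle_t, std::shared_ptr<AudioOutputDescriptor>> gOutputs;

// Equivalent of addOutput(): remember which descriptor belongs to the
// handle that AudioFlinger returned from openOutput().
void addOutput(audio_io_handle_t output,
               const std::shared_ptr<AudioOutputDescriptor>& desc) {
    gOutputs[output] = desc;
}

// Later, code that only has the integer handle can get the descriptor
// (and through it the associated state/thread) back:
std::shared_ptr<AudioOutputDescriptor> findOutput(audio_io_handle_t output) {
    auto it = gOutputs.find(output);
    return (it != gOutputs.end()) ? it->second : nullptr;
}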

We now analyze the three key points in turn: loading the configuration file, loadHwModule, and opening an output.

2.1 Loading the configuration file

loadAudioPolicyConfig is implemented as follows:

status_t AudioPolicyManager::loadAudioPolicyConfig(const char *path)
{
    cnode *root;
    char *data;

    data = (char *)load_file(path, NULL);
    if (data == NULL) {
        return -ENODEV;
    }
    root = config_node("", "");
    config_load(root, data);

    loadHwModules(root);
    // legacy audio_policy.conf files have one global_configuration section
    loadGlobalConfig(root, getModuleFromName(AUDIO_HARDWARE_MODULE_ID_PRIMARY));
    config_free(root);
    free(root);
    free(data);
    return NO_ERROR;
}

Essentially this parses the file audio_policy.conf, which is structured as follows:

# Global configuration section: lists input and output devices always present on the device
# as well as the output device selected by default.
# Devices are designated by a string that corresponds to the enum in audio.h

global_configuration {
  attached_output_devices AUDIO_DEVICE_OUT_SPEAKER
  default_output_device AUDIO_DEVICE_OUT_SPEAKER
  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC|AUDIO_DEVICE_IN_VOICE_CALL|AUDIO_DEVICE_IN_REMOTE_SUBMIX
}

# audio hardware module section: contains descriptors for all audio hw modules present on the
# device. Each hw module node is named after the corresponding hw module library base name.
# For instance, "primary" corresponds to audio.primary.<device>.so.
# The "primary" module is mandatory and must include at least one output with
# AUDIO_OUTPUT_FLAG_PRIMARY flag.
# Each module descriptor contains one or more output profile descriptors and zero or more
# input profile descriptors. Each profile lists all the parameters supported by a given output
# or input stream category.
# The "channel_masks", "formats", "devices" and "flags" are specified using strings corresponding
# to enums in audio.h and audio_policy.h. They are concatenated by use of "|" without space or "\n".

audio_hw_modules {
  primary { # each module corresponds to one vendor-provided .so library
    outputs { # a module can have multiple outputs
      primary { # each output lists its supported parameters
        sampling_rates 48000
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_WIRED_HEADSET|AUDIO_DEVICE_OUT_WIRED_HEADPHONE|AUDIO_DEVICE_OUT_ALL_SCO
        flags AUDIO_OUTPUT_FLAG_PRIMARY # marks the default (primary) output
      }
    }
    inputs { # a module can have multiple inputs
      primary {
        sampling_rates 8000|11025|12000|16000|22050|24000|32000|44100|48000
        channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_BUILTIN_MIC|AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET|AUDIO_DEVICE_IN_WIRED_HEADSET|AUDIO_DEVICE_IN_VOICE_CALL
      }
    }
  }
  a2dp {
    outputs {
      a2dp {
        sampling_rates 44100
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_ALL_A2DP
      }
    }
  }
  # ...
}

To summarize: this code loads and parses /vendor/etc/audio_policy.conf or /system/etc/audio_policy.conf, in three main steps:

  1. For each module entry in the configuration file, create a new HwModule(name) and add it to the mHwModules array.
  2. For each output in a module, create a new IOProfile and add it to the module's mOutputProfiles.
  3. For each input in a module, create a new IOProfile and add it to the module's mInputProfiles.

The hierarchy produced by parsing this file, and how it maps onto the data structures, can be pictured with the sketch below:
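
This is a minimal, self-contained sketch of that hierarchy (simplified stand-ins for illustration only; the real AOSP classes use android::Vector, android::sp<> and the enums from system/audio.h):

#include <cstdint>
#include <memory>
#include <string>
#include <vector>

using audio_devices_t       = uint32_t;  // stand-in for the device bitmask type
using audio_output_flags_t  = uint32_t;  // stand-in for the output flags type
using audio_module_handle_t = int32_t;   // stand-in for the module handle type

// One "output { ... }" or "input { ... }" entry in audio_policy.conf.
struct IOProfile {
    std::vector<uint32_t> samplingRates;         // "sampling_rates"
    std::vector<uint32_t> channelMasks;          // "channel_masks"
    std::vector<uint32_t> formats;               // "formats"
    audio_devices_t       supportedDevices = 0;  // "devices" bitmask
    audio_output_flags_t  flags = 0;             // "flags", e.g. AUDIO_OUTPUT_FLAG_PRIMARY
};

// One module entry such as "primary" or "a2dp"; maps to audio.<name>.<device>.so.
struct HwModule {
    std::string           name;
    audio_module_handle_t handle = 0;            // filled in later by loadHwModule()
    std::vector<std::shared_ptr<IOProfile>> outputProfiles;  // from "outputs { ... }"
    std::vector<std::shared_ptr<IOProfile>> inputProfiles;   // from "inputs { ... }"
};

// AudioPolicyManager keeps the parsed result in something equivalent to:
// std::vector<std::shared_ptr<HwModule>> mHwModules;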

2.2 Implementation of loadHwModule

mpClientInterface->loadHwModule(mHwModules[i]->mName) ultimately calls AudioFlinger's loadHwModule method. The client-side implementation is:

audio_module_handle_t AudioPolicyService::AudioPolicyClient::loadHwModule(const char *name)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return 0;
    }

    return af->loadHwModule(name);
}

This ends up invoking AudioFlinger's loadHwModule method, which we analyze in detail in the next chapter.

2.3 Implementation of openOutput

mpClientInterface->openOutput ultimately calls AudioFlinger's openOutput method. The client-side implementation is:

status_t AudioPolicyService::AudioPolicyClient::openOutput(audio_module_handle_t module,
                                                           //...
                                                           audio_output_flags_t flags)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return PERMISSION_DENIED;
    }
    return af->openOutput(module, output, config, devices, address, latencyMs, flags);
}

This ends up invoking AudioFlinger's openOutput (which eventually creates a MixerThread associated with the corresponding output, so that applications can later hand data to that thread and on to the hardware device). We analyze this in detail in the next chapter.


Reposted from blog.csdn.net/vviccc/article/details/105275077