Android Audio Notes (Part 1)

Android Audio System in Detail

Recommended reading:

"Android Audio System: From AudioTrack to AudioFlinger"

https://blog.csdn.net/zyuanyun/article/details/60890534

"An Introduction to the Android Audio Framework"

http://blog.csdn.net/yangwen123/article/details/39502689

4.1 Analysis Approach

a. How is a Thread created?

   AudioPolicyService is the policy maker;

   AudioFlinger is the policy executor.

   So: AudioPolicyService directs AudioFlinger to create Threads according to the configuration file.

b. A Thread corresponds to an output; which device nodes does each output map to?

c. The creation of AudioTrack and Track: which Thread, and which output, does an AudioTrack correspond to?

d. How does AudioTrack transfer data to the Thread?

   How does AudioTrack play, pause, and stop?

The hardware is operated through a Hardware Module:

What is the hw module's name? Which .so files implement it?

Which outputs does the module support?

Which devices does each output support, and with what parameters?

All of this is configured in /system/etc/audio_policy.conf.

4.2 Key Concepts Illustrated by Example

stream type, strategy, device, output, profile, module: together these make up the policy.

out flag: for example, a professional app that plays sound only over HDMI can specify the out flag AUDIO_OUTPUT_FLAG_DIRECT, which sends the audio directly to the corresponding device without mixing.

Android uses a hardware module to access hardware such as a sound card.

A sound card drives speakers, headphones, and so on; these are called devices.

For easier management, a group of devices on the same hardware that share the same parameters is called an output.

Which outputs a module supports, and which devices each output supports, is described by the configuration file /system/etc/audio_policy.conf.
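The module / output / device hierarchy described above can be sketched as plain data structures. The types below (OutputProfile, HwModule) are illustrative stand-ins, not the real framework classes:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative in-memory mirror of audio_policy.conf; these are stand-in
// types, not the real HwModule/IOProfile classes.
struct OutputProfile {
    std::string name;     // e.g. "primary"
    uint32_t    devices;  // bitmask of AUDIO_DEVICE_OUT_* style values
};

struct HwModule {
    std::string name;     // e.g. "primary" -> audio.primary.<board>.so
    std::vector<OutputProfile> outputs;
};

// A module supports a device if any of its outputs lists that device.
inline bool moduleSupportsDevice(const HwModule& m, uint32_t device) {
    for (const OutputProfile& p : m.outputs)
        if (p.devices & device) return true;
    return false;
}
```

One module owns several outputs, and each output owns a set of devices, which is exactly the nesting the configuration file expresses.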

When an app plays a sound, it must specify the sound category: the stream type.

There are many stream types, so first see which class (strategy) the type belongs to.

The strategy determines which device plays the sound: speaker, headphones, or Bluetooth?

The device determines the output, and hence the corresponding PlaybackThread;

the audio data is then passed to that thread.

How a stream ultimately gets routed to a device,

how streams affect each other (a high-priority sound can mute the others),

and so on: all of this is collectively called the policy.
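The routing chain just described (stream type → strategy → device) can be sketched as two lookup functions. The enums and selection rules below are deliberately simplified illustrations, not the real AudioPolicyManager logic:

```cpp
#include <cstdint>

// Simplified sketch of the chain: stream type -> strategy -> device.
// These enums and rules are illustrative, not the real AudioPolicyManager code.
enum StreamType { STREAM_MUSIC, STREAM_RING, STREAM_VOICE_CALL };
enum Strategy   { STRATEGY_MEDIA, STRATEGY_SONIFICATION, STRATEGY_PHONE };

const uint32_t DEVICE_SPEAKER = 1u << 0;
const uint32_t DEVICE_HEADSET = 1u << 1;

// 1. Group stream types into strategies (streams with the same behavior).
Strategy strategyFor(StreamType t) {
    switch (t) {
        case STREAM_RING:       return STRATEGY_SONIFICATION;
        case STREAM_VOICE_CALL: return STRATEGY_PHONE;
        default:                return STRATEGY_MEDIA;
    }
}

// 2. Pick a device for the strategy among the currently available devices.
uint32_t deviceFor(Strategy s, uint32_t availableDevices) {
    if (s == STRATEGY_SONIFICATION)        // ringtones go to the speaker
        return DEVICE_SPEAKER;
    if (availableDevices & DEVICE_HEADSET) // media prefers a plugged-in headset
        return DEVICE_HEADSET;
    return DEVICE_SPEAKER;
}
```

The device chosen here would then select an output, and hence the PlaybackThread that receives the data.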

Output and input devices:

https://blog.csdn.net/zzqhost/article/details/7711935

Concepts:

module: the hardware access library, used to operate devices

output: a group of devices from the same hardware that share the same parameters

device: speaker, headphones, ...

These are declared in /system/etc/audio_policy.conf.

profile: configuration describing an output

    a. which devices it could potentially support

    b. parameters: sampling rate, channels

output:

    a. the devices it can actually support right now

    b. parameters

profile: can support the speaker and the headphones

output: supports the headphones only while they are plugged in
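This profile-vs-output distinction boils down to a bitmask intersection, mirroring the `outProfile->mSupportedDevices & mAttachedOutputDevices` check that appears later in the AudioPolicyManagerBase constructor. A minimal sketch (the device bit values are made up for illustration):

```cpp
#include <cstdint>

// The profile lists devices an output *could* support; the open output only
// exposes those that are also attached right now. Bit values are illustrative.
const uint32_t DEV_SPEAKER = 1u << 0;
const uint32_t DEV_HEADSET = 1u << 1;

// Devices an open output can use at this moment.
uint32_t currentDevices(uint32_t profileDevices, uint32_t attachedDevices) {
    return profileDevices & attachedDevices;
}
```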

APP: playing music

Q: There are so many possible paths when playing music; what should the app do?

A: The app does not need to care; it only declares the sound category (stream type).

The app specifies a stream type. Since there are too many of them, they are grouped into strategies (streams with the same behavior):

    same playback device, same playback priority.

4.3 Overview of the Files Involved

System service app:

frameworks/av/media/mediaserver/main_mediaserver.cpp

AudioFlinger :

AudioFlinger.cpp  (frameworks/av/services/audioflinger/AudioFlinger.cpp)

Threads.cpp       (frameworks/av/services/audioflinger/Threads.cpp)

Tracks.cpp        (frameworks/av/services/audioflinger/Tracks.cpp)

audio_hw_hal.cpp  (hardware/libhardware_legacy/audio/Audio_hw_hal.cpp)

AudioHardware.cpp (device/friendly-arm/common/libaudio/AudioHardware.cpp)

AudioPolicyService:

AudioPolicyService.cpp    (frameworks/av/services/audiopolicy/AudioPolicyService.cpp)

AudioPolicyClientImpl.cpp (frameworks/av/services/audiopolicy/AudioPolicyClientImpl.cpp)

AudioPolicyInterfaceImpl.cpp(frameworks/av/services/audiopolicy/AudioPolicyInterfaceImpl.cpp)

AudioPolicyManager.cpp (device/friendly-arm/common/libaudio/AudioPolicyManager.cpp)

AudioPolicyManager.h   (device/friendly-arm/common/libaudio/AudioPolicyManager.h)

AudioPolicyManagerBase.cpp (hardware/libhardware_legacy/audio/AudioPolicyManagerBase.cpp)

Erratum: the three files above have been replaced by the following file:

AudioPolicyManager.cpp (frameworks/av/services/audiopolicy/AudioPolicyManager.cpp)

Files used by the application:

AudioTrack.java (frameworks/base/media/java/android/media/AudioTrack.java)

android_media_AudioTrack.cpp (frameworks/base/core/jni/android_media_AudioTrack.cpp)

AudioTrack.cpp  (frameworks/av/media/libmedia/AudioTrack.cpp)

AudioSystem.cpp (frameworks/av/media/libmedia/AudioSystem.cpp)

Audio framework diagram: (figure not reproduced here)

4.4 AudioPolicyService Startup Analysis

a. Load and parse /vendor/etc/audio_policy.conf or /system/etc/audio_policy.conf.

   For each module entry in the configuration file: new HwModule(name), placed in the mHwModules array.

   For each output in a module: new IOProfile, placed in the module's mOutputProfiles.

   For each input in a module: new IOProfile, placed in the module's mInputProfiles.

b. Load the vendor-supplied .so file named after the module (done through AudioFlinger).

c. Open the corresponding outputs               (done through AudioFlinger's openOutput).

Q: What is the default sound card? How are the sound card, headphone jack, and speaker made known to the Android system?

A: The vendor decides, and declares them in a configuration file.

AudioPolicyService:

a. reads and parses the configuration file;

b. based on the configuration file, calls AudioFlinger to open the outputs and create the threads.

Summary: for each module in audio_policy.conf,

loadHwModule handles it as follows:

a. new HwModule (with its name, e.g. "primary")

b. mOutputProfiles: one entry per output profile

c. mInputProfiles: one entry per input profile

AudioPolicyManagerBase.cpp (z:\android-5.0.2\hardware\libhardware_legacy\audio)   

// ----------------------------------------------------------------------------

// AudioPolicyManagerBase

// ----------------------------------------------------------------------------

AudioPolicyManagerBase::AudioPolicyManagerBase(AudioPolicyClientInterface *clientInterface)

    :

#ifdef AUDIO_POLICY_TEST

    Thread(false),

#endif //AUDIO_POLICY_TEST

    mPrimaryOutput((audio_io_handle_t)0),

    mAvailableOutputDevices(AUDIO_DEVICE_NONE),

    mPhoneState(AudioSystem::MODE_NORMAL),

    mLimitRingtoneVolume(false), mLastVoiceVolume(-1.0f),

    mTotalEffectsCpuLoad(0), mTotalEffectsMemory(0),

    mA2dpSuspended(false), mHasA2dp(false), mHasUsb(false), mHasRemoteSubmix(false),

    mSpeakerDrcEnabled(false)

{

    mpClientInterface = clientInterface;

    for (int i = 0; i < AudioSystem::NUM_FORCE_USE; i++) {

        mForceUse[i] = AudioSystem::FORCE_NONE;

    }

    mA2dpDeviceAddress = String8("");

    mScoDeviceAddress = String8("");

    mUsbOutCardAndDevice = String8("");

    if (loadAudioPolicyConfig(AUDIO_POLICY_VENDOR_CONFIG_FILE) != NO_ERROR) {

        if (loadAudioPolicyConfig(AUDIO_POLICY_CONFIG_FILE) != NO_ERROR) {

            ALOGE("could not load audio policy configuration file, setting defaults");

            defaultAudioPolicyConfig();

        }

    }

    // must be done after reading the policy

    initializeVolumeCurves();

    // open all output streams needed to access attached devices

    for (size_t i = 0; i < mHwModules.size(); i++) {

// see loadHwModule in frameworks/av/services/audioflinger/AudioFlinger.cpp

        mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->mName);

        if (mHwModules[i]->mHandle == 0) {

            ALOGW("could not open HW module %s", mHwModules[i]->mName);

            continue;

        }

        // open all output streams needed to access attached devices

        // except for direct output streams that are only opened when they are actually

        // required by an app.

        for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)

        {

            const IOProfile *outProfile = mHwModules[i]->mOutputProfiles[j];

            if ((outProfile->mSupportedDevices & mAttachedOutputDevices) &&

                    ((outProfile->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) == 0)) {

                AudioOutputDescriptor *outputDesc = new AudioOutputDescriptor(outProfile);

                outputDesc->mDevice = (audio_devices_t)(mDefaultOutputDevice &

                                                            outProfile->mSupportedDevices);

                audio_io_handle_t output = mpClientInterface->openOutput(

                                                outProfile->mModule->mHandle,

                                                &outputDesc->mDevice,

                                                &outputDesc->mSamplingRate,

                                                &outputDesc->mFormat,

                                                &outputDesc->mChannelMask,

                                                &outputDesc->mLatency,

                                                outputDesc->mFlags);

                if (output == 0) {

                    delete outputDesc;

                } else {

                    mAvailableOutputDevices = (audio_devices_t)(mAvailableOutputDevices |

                                            (outProfile->mSupportedDevices & mAttachedOutputDevices));

                    if (mPrimaryOutput == 0 &&

                            outProfile->mFlags & AUDIO_OUTPUT_FLAG_PRIMARY) {

                        mPrimaryOutput = output;

                    }

                    addOutput(output, outputDesc);

                    setOutputDevice(output,

                                    (audio_devices_t)(mDefaultOutputDevice &

                                                        outProfile->mSupportedDevices),

                                    true);

                }

            }

        }

    }

    ALOGE_IF((mAttachedOutputDevices & ~mAvailableOutputDevices),

             "Not output found for attached devices %08x",

             (mAttachedOutputDevices & ~mAvailableOutputDevices));

    ALOGE_IF((mPrimaryOutput == 0), "Failed to open primary output");

    updateDevicesAndOutputs();

#ifdef AUDIO_POLICY_TEST

    if (mPrimaryOutput != 0) {

        AudioParameter outputCmd = AudioParameter();

        outputCmd.addInt(String8("set_id"), 0);

        mpClientInterface->setParameters(mPrimaryOutput, outputCmd.toString());

        mTestDevice = AUDIO_DEVICE_OUT_SPEAKER;

        mTestSamplingRate = 44100;

        mTestFormat = AudioSystem::PCM_16_BIT;

        mTestChannels =  AudioSystem::CHANNEL_OUT_STEREO;

        mTestLatencyMs = 0;

        mCurOutput = 0;

        mDirectOutput = false;

        for (int i = 0; i < NUM_TEST_OUTPUTS; i++) {

            mTestOutputs[i] = 0;

        }

        const size_t SIZE = 256;

        char buffer[SIZE];

        snprintf(buffer, SIZE, "AudioPolicyManagerTest");

        run(buffer, ANDROID_PRIORITY_AUDIO);

    }

#endif //AUDIO_POLICY_TEST

}

Audio_policy_conf.h (z:\android-5.0.2\hardware\libhardware_legacy\include\hardware_legacy)  

#define AUDIO_POLICY_CONFIG_FILE "/system/etc/audio_policy.conf"

#define AUDIO_POLICY_VENDOR_CONFIG_FILE "/vendor/etc/audio_policy.conf"

Looking up the paths above:

/vendor/etc/audio_policy.conf

#

# Audio policy configuration for generic device builds (goldfish audio HAL - emulator)

#

# Global configuration section: lists input and output devices always present on the device

# as well as the output device selected by default.

# Devices are designated by a string that corresponds to the enum in audio.h

global_configuration {

  attached_output_devices AUDIO_DEVICE_OUT_SPEAKER

  default_output_device AUDIO_DEVICE_OUT_SPEAKER

  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC

}

# audio hardware module section: contains descriptors for all audio hw modules present on the

# device. Each hw module node is named after the corresponding hw module library base name.

# For instance, "primary" corresponds to audio.primary..so.

# The "primary" module is mandatory and must include at least one output with

# AUDIO_OUTPUT_FLAG_PRIMARY flag.

# Each module descriptor contains one or more output profile descriptors and zero or more

# input profile descriptors. Each profile lists all the parameters supported by a given output

# or input stream category.

# The "channel_masks", "formats", "devices" and "flags" are specified using strings corresponding

# to enums in audio.h and audio_policy.h. They are concatenated by use of "|" without space or "\n".

audio_hw_modules {

  primary {  // one module corresponds to one vendor-supplied .so file

    outputs { // a module can have multiple outputs

      primary { // each output declares its parameters

        sampling_rates 44100

        channel_masks AUDIO_CHANNEL_OUT_STEREO

        formats AUDIO_FORMAT_PCM_16_BIT

        devices          AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_WIRED_HEADSET|AUDIO_DEVICE_OUT_WIRED_HEADPHONE|AUDIO_DEVICE_OUT_ALL_SCO|AUDIO_DEVICE_OUT_AUX_DIGITAL

        flags AUDIO_OUTPUT_FLAG_PRIMARY // marks the default (primary) output

      }

    }

    inputs { // a module can have multiple inputs

      primary {

        sampling_rates 8000|11025|12000|16000|22050|24000|32000|44100|48000

        channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO

        formats AUDIO_FORMAT_PCM_16_BIT

        devices AUDIO_DEVICE_IN_BUILTIN_MIC|AUDIO_DEVICE_IN_WIRED_HEADSET|AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET|AUDIO_DEVICE_IN_AUX_DIGITAL|AUDIO_DEVICE_IN_VOICE_CALL

      }

    }

  }

}

AudioPolicyManagerBase.cpp (z:\android-5.0.2\hardware\libhardware_legacy\audio)    

status_t AudioPolicyManagerBase::loadAudioPolicyConfig(const char *path)

{

    cnode *root;

    char *data;

    data = (char *)load_file(path, NULL);

    if (data == NULL) {

        return -ENODEV;

    }

    root = config_node("", "");

    config_load(root, data);

    loadGlobalConfig(root);

    loadHwModules(root);

    config_free(root);

    free(root);

    free(data);

    ALOGI("loadAudioPolicyConfig() loaded %s\n", path);

    return NO_ERROR;

}

4.5 AudioFlinger Startup Analysis

Notes:

a. Registers the AudioFlinger service.

b. Is called by AudioPolicyService to load the vendor-supplied .so file.

b.1 Which .so file is loaded? What is its name, and where does the name come from?

    The name comes from /system/etc/audio_policy.conf: "primary".

    So the .so file is audio.primary.XXX.so, e.g. audio.primary.tiny4412.so.
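How that filename is composed can be sketched as a small helper: hw_get_module_by_class() effectively joins the class id ("audio"), the instance name from audio_policy.conf ("primary"), and a board variant read from a system property such as ro.hardware. The helper below is illustrative, not the real implementation:

```cpp
#include <string>

// Illustrative helper: hw_get_module_by_class() joins the class id ("audio"),
// the instance name from audio_policy.conf ("primary"), and a board variant
// read from a property such as ro.hardware ("tiny4412").
std::string halLibraryName(const std::string& classId,
                           const std::string& instance,
                           const std::string& variant) {
    return classId + "." + instance + "." + variant + ".so";
}
```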

b.2 Which source files make up that .so file? See Android.mk:

    audio.primary.$(TARGET_DEVICE) : device/friendly-arm/common/libaudio/AudioHardware.cpp

                                     libhardware_legacy

    libhardware_legacy : hardware/libhardware_legacy/audio/audio_hw_hal.cpp

/work/android-5.0.2/device/friendly-arm/common/libaudio

LOCAL_PATH:= $(call my-dir)

include $(CLEAR_VARS)

LOCAL_SRC_FILES:= \

        AudioHardware.cpp

LOCAL_MODULE := audio.primary.$(TARGET_DEVICE)

LOCAL_MODULE_PATH := $(TARGET_OUT_SHARED_LIBRARIES)/hw

LOCAL_STATIC_LIBRARIES:= libmedia_helper

LOCAL_SHARED_LIBRARIES:= \

        libutils \

        liblog \

        libhardware_legacy \

        libtinyalsa \

        libaudioutils

LOCAL_WHOLE_STATIC_LIBRARIES := libaudiohw_legacy

LOCAL_MODULE_TAGS := optional

LOCAL_SHARED_LIBRARIES += libdl

LOCAL_C_INCLUDES += \

        external/tinyalsa/include \

        system/media/audio_effects/include \

        system/media/audio_utils/include \

        device/friendly-arm/$(TARGET_DEVICE)/conf

ifeq ($(strip $(BOARD_USES_I2S_AUDIO)),true)

  LOCAL_CFLAGS += -DUSES_I2S_AUDIO

endif

ifeq ($(strip $(BOARD_USES_PCM_AUDIO)),true)

  LOCAL_CFLAGS += -DUSES_PCM_AUDIO

endif

ifeq ($(strip $(BOARD_USES_SPDIF_AUDIO)),true)

  LOCAL_CFLAGS += -DUSES_SPDIF_AUDIO

endif

ifeq ($(strip $(USE_ULP_AUDIO)),true)

  LOCAL_CFLAGS += -DUSE_ULP_AUDIO

endif

include $(BUILD_SHARED_LIBRARY)

include $(CLEAR_VARS)

LOCAL_SRC_FILES := AudioPolicyManager.cpp

LOCAL_SHARED_LIBRARIES := libcutils libutils

LOCAL_STATIC_LIBRARIES := libmedia_helper

LOCAL_WHOLE_STATIC_LIBRARIES := libaudiopolicy_legacy

LOCAL_MODULE := audio_policy.$(TARGET_DEVICE)

LOCAL_MODULE_PATH := $(TARGET_OUT_SHARED_LIBRARIES)/hw

LOCAL_MODULE_TAGS := optional

ifeq ($(BOARD_HAVE_BLUETOOTH),true)

  LOCAL_CFLAGS += -DWITH_A2DP

endif

include $(BUILD_SHARED_LIBRARY)

/work/android-5.0.2/hardware/libhardware_legacy/audio

# Copyright 2011 The Android Open Source Project

#AUDIO_POLICY_TEST := true

#ENABLE_AUDIO_DUMP := true

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_SRC_FILES := \

    AudioHardwareInterface.cpp \

    audio_hw_hal.cpp

LOCAL_MODULE := libaudiohw_legacy

LOCAL_MODULE_TAGS := optional

LOCAL_STATIC_LIBRARIES := libmedia_helper

LOCAL_CFLAGS := -Wno-unused-parameter

include $(BUILD_STATIC_LIBRARY)

include $(CLEAR_VARS)

LOCAL_SRC_FILES := \

    AudioPolicyManagerBase.cpp \

    AudioPolicyCompatClient.cpp \

    audio_policy_hal.cpp

ifeq ($(AUDIO_POLICY_TEST),true)

  LOCAL_CFLAGS += -DAUDIO_POLICY_TEST

endif

LOCAL_STATIC_LIBRARIES := libmedia_helper

LOCAL_MODULE := libaudiopolicy_legacy

LOCAL_MODULE_TAGS := optional

LOCAL_CFLAGS += -Wno-unused-parameter

include $(BUILD_STATIC_LIBRARY)

# The default audio policy, for now still implemented on top of legacy

# policy code

include $(CLEAR_VARS)

LOCAL_SRC_FILES := \

    AudioPolicyManagerDefault.cpp

LOCAL_SHARED_LIBRARIES := \

    libcutils \

    libutils \

    liblog

LOCAL_STATIC_LIBRARIES := \

    libmedia_helper

LOCAL_WHOLE_STATIC_LIBRARIES := \

    libaudiopolicy_legacy

LOCAL_MODULE := audio_policy.default

LOCAL_MODULE_RELATIVE_PATH := hw

LOCAL_MODULE_TAGS := optional

LOCAL_CFLAGS := -Wno-unused-parameter

include $(BUILD_SHARED_LIBRARY)

#ifeq ($(ENABLE_AUDIO_DUMP),true)

#  LOCAL_SRC_FILES += AudioDumpInterface.cpp

#  LOCAL_CFLAGS += -DENABLE_AUDIO_DUMP

#endif

#

#ifeq ($(strip $(BOARD_USES_GENERIC_AUDIO)),true)

#  LOCAL_CFLAGS += -D GENERIC_AUDIO

#endif

#ifeq ($(BOARD_HAVE_BLUETOOTH),true)

#  LOCAL_SRC_FILES += A2dpAudioInterface.cpp

#  LOCAL_SHARED_LIBRARIES += liba2dp

#  LOCAL_C_INCLUDES += $(call include-path-for, bluez)

#

#  LOCAL_CFLAGS += \

#      -DWITH_BLUETOOTH \

#endif

#

#include $(BUILD_SHARED_LIBRARY)

#    AudioHardwareGeneric.cpp \

#    AudioHardwareStub.cpp \

b.3 Wrapping the hardware:

    AudioFlinger       : AudioHwDevice (stored in the mAudioHwDevs array)

    audio_hw_hal.cpp   : audio_hw_device

    vendor             : AudioHardware (derived from AudioHardwareInterface)

    AudioHwDevice wraps audio_hw_device;

    the functions of audio_hw_device are implemented through an AudioHardware object.

audio_module_handle_t AudioFlinger::loadHwModule(const char *name)

{

    if (name == NULL) {

        return 0;

    }

    if (!settingsAllowed()) {

        return 0;

    }

    Mutex::Autolock _l(mLock);

    return loadHwModule_l(name);

}

// loadHwModule_l() must be called with AudioFlinger::mLock held

audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)

{

    for (size_t i = 0; i < mAudioHwDevs.size(); i++) {

        if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {

            ALOGW("loadHwModule() module %s already loaded", name);

            return mAudioHwDevs.keyAt(i);

        }

    }

    audio_hw_device_t *dev;

    int rc = load_audio_interface(name, &dev);

    if (rc) {

        ALOGI("loadHwModule() error %d loading module %s ", rc, name);

        return 0;

    }

  ......

    return handle;

}

static int load_audio_interface(const char *if_name, audio_hw_device_t **dev)

{

    const hw_module_t *mod;

    int rc;

// if_name : "primary"

// AUDIO_HARDWARE_MODULE_ID : "audio"

    // together they select audio.primary.XXX.so

    rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, &mod);

    ALOGE_IF(rc, "%s couldn't load audio hw module %s.%s (%s)", __func__,

                 AUDIO_HARDWARE_MODULE_ID, if_name, strerror(-rc));

    if (rc) {

        goto out;

    }

    rc = audio_hw_device_open(mod, dev);

    ALOGE_IF(rc, "%s couldn't open audio hw device in %s.%s (%s)", __func__,

                 AUDIO_HARDWARE_MODULE_ID, if_name, strerror(-rc));

    if (rc) {

        goto out;

    }

    if ((*dev)->common.version < AUDIO_DEVICE_API_VERSION_MIN) {

        ALOGE("%s wrong audio hw device version %04x", __func__, (*dev)->common.version);

        rc = BAD_VALUE;

        goto out;

    }

    return 0;

out:

    *dev = NULL;

    return rc;

}

hardware.c    F:\android_project\android_system_code\hardware\libhardware    

127|shell@tiny4412:/ $ getprop  "ro.hardware"  // query the property value

tiny4412

static const char *variant_keys[] = {

    "ro.hardware",  /* This goes first so that it can pick up a different

                       file on the emulator. */

    "ro.product.board",

    "ro.board.platform",

    "ro.arch"

};

int hw_get_module_by_class(const char *class_id, const char *inst,

                           const struct hw_module_t **module)

{

    int i;

    char prop[PATH_MAX];

    char path[PATH_MAX];

    char name[PATH_MAX];

    char prop_name[PATH_MAX];

    if (inst)

        snprintf(name, PATH_MAX, "%s.%s", class_id, inst);

    else

        strlcpy(name, class_id, PATH_MAX);

    /*

     * Here we rely on the fact that calling dlopen multiple times on

     * the same .so will simply increment a refcount (and not load

     * a new copy of the library).

     * We also assume that dlopen() is thread-safe.

     */

    /* First try a property specific to the class and possibly instance */

    snprintf(prop_name, sizeof(prop_name), "ro.hardware.%s", name);

    if (property_get(prop_name, prop, NULL) > 0) {

        if (hw_module_exists(path, sizeof(path), name, prop) == 0) {

            goto found;

        }

    }

    /* Loop through the configuration variants looking for a module */

    for (i = 0; i < HAL_VARIANT_KEYS_COUNT; i++) {

        if (property_get(variant_keys[i], prop, NULL) == 0) {

            continue;

        }

        if (hw_module_exists(path, sizeof(path), name, prop) == 0) {

            goto found;

        }

    }

    /* Nothing found, try the default */

    if (hw_module_exists(path, sizeof(path), name, "default") == 0) {

        goto found;

    }

    return -ENOENT;

found:

    /* load the module, if this fails, we're doomed, and we should not try

     * to load a different variant. */

    return load(class_id, path, module);

}
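The lookup order in hw_get_module_by_class() above can be summarized as a list of candidate variants tried in sequence: first the module-specific property, then each generic variant_keys property, and finally "default". A sketch (property reading and file-existence checks are stubbed out; names are illustrative):

```cpp
#include <string>
#include <vector>

// Candidate variants tried in order by hw_get_module_by_class(): a property
// specific to the module, then each generic variant key, then "default".
// Property reading and file-existence checks are stubbed out here.
std::vector<std::string> candidateVariants(const std::string& name,
                                           const std::vector<std::string>& variantKeys) {
    std::vector<std::string> out;
    out.push_back("ro.hardware." + name);      // e.g. ro.hardware.audio.primary
    for (const std::string& key : variantKeys) // ro.hardware, ro.product.board, ...
        out.push_back(key);
    out.push_back("default");                  // last resort: audio.primary.default.so
    return out;
}
```

The real code stops at the first candidate whose .so file exists, then dlopen()s it.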

hardware\libhardware_legacy\audio\audio_hw_hal.cpp

static int legacy_adev_open(const hw_module_t* module, const char* name,

                            hw_device_t** device)

{

    struct legacy_audio_device *ladev;

    int ret;

    if (strcmp(name, AUDIO_HARDWARE_INTERFACE) != 0)

        return -EINVAL;

    ladev = (struct legacy_audio_device *)calloc(1, sizeof(*ladev));

    if (!ladev)

        return -ENOMEM;

    ladev->device.common.tag= HARDWARE_DEVICE_TAG;

    ladev->device.common.version= AUDIO_DEVICE_API_VERSION_2_0;

    ladev->device.common.module = const_cast<hw_module_t*>(module);

    ladev->device.common.close= legacy_adev_close;

    ladev->device.init_check = adev_init_check;

    ladev->device.set_voice_volume = adev_set_voice_volume;

    ladev->device.set_master_volume = adev_set_master_volume;

    ladev->device.get_master_volume = adev_get_master_volume;

    ladev->device.set_mode = adev_set_mode;

    ladev->device.set_mic_mute = adev_set_mic_mute;

    ladev->device.get_mic_mute = adev_get_mic_mute;

    ladev->device.set_parameters = adev_set_parameters;

    ladev->device.get_parameters = adev_get_parameters;

    ladev->device.get_input_buffer_size = adev_get_input_buffer_size;

    ladev->device.open_output_stream = adev_open_output_stream;

    ladev->device.close_output_stream = adev_close_output_stream;

    ladev->device.open_input_stream = adev_open_input_stream;

    ladev->device.close_input_stream = adev_close_input_stream;

    ladev->device.dump = adev_dump;

    ladev->hwif = createAudioHardware();

    if (!ladev->hwif) {

        ret = -EIO;

        goto err_create_audio_hw;

    }

    *device = &ladev->device.common;

    return 0;

err_create_audio_hw:

    free(ladev);

    return ret;

}

c. Is called by AudioPolicyService to open outputs and create playback threads.

hardware\libhardware_legacy\audio\AudioPolicyManagerBase.cpp

AudioPolicyManagerBase::AudioPolicyManagerBase(AudioPolicyClientInterface *clientInterface)

{

    ......

// open all output streams needed to access attached devices

        // except for direct output streams that are only opened when they are actually

        // required by an app.

        for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)

        {

            const IOProfile *outProfile = mHwModules[i]->mOutputProfiles[j];

            if ((outProfile->mSupportedDevices & mAttachedOutputDevices) &&

                    ((outProfile->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) == 0)) {

                AudioOutputDescriptor *outputDesc = new AudioOutputDescriptor(outProfile);

                outputDesc->mDevice = (audio_devices_t)(mDefaultOutputDevice &

                                                            outProfile->mSupportedDevices);

                audio_io_handle_t output = mpClientInterface->openOutput(

                                                outProfile->mModule->mHandle,

                                                &outputDesc->mDevice,

                                                &outputDesc->mSamplingRate,

                                                &outputDesc->mFormat,

                                                &outputDesc->mChannelMask,

                                                &outputDesc->mLatency,

                                                outputDesc->mFlags);

                if (output == 0) {

                    delete outputDesc;

                } else {

                    mAvailableOutputDevices = (audio_devices_t)(mAvailableOutputDevices |

                                            (outProfile->mSupportedDevices & mAttachedOutputDevices));

                    if (mPrimaryOutput == 0 &&

                            outProfile->mFlags & AUDIO_OUTPUT_FLAG_PRIMARY) {

                        mPrimaryOutput = output;

                    }

                    addOutput(output, outputDesc);

                    setOutputDevice(output,

                                    (audio_devices_t)(mDefaultOutputDevice &

                                                        outProfile->mSupportedDevices),

                                    true);

                }

            }

        }

    }

    ......

}

frameworks\av\services\audioflinger\AudioFlinger.cpp

status_t AudioFlinger::openOutput(audio_module_handle_t module,

                                  audio_io_handle_t *output,

                                  audio_config_t *config,

                                  audio_devices_t *devices,

                                  const String8& address,

                                  uint32_t *latencyMs,

                                  audio_output_flags_t flags)

{

    ALOGV("openOutput(), module %d Device %x, SamplingRate %d, Format %#08x, Channels %x, flags %x",

              module,

              (devices != NULL) ? *devices : 0,

              config->sample_rate,

              config->format,

              config->channel_mask,

              flags);

    if (*devices == AUDIO_DEVICE_NONE) {

        return BAD_VALUE;

    }

    Mutex::Autolock _l(mLock);

   // create a playback thread

    sp<PlaybackThread> thread = openOutput_l(module, output, config, *devices, address, flags);

    if (thread != 0) {

        *latencyMs = thread->latency();

        // notify client processes of the new output creation

        thread->audioConfigChanged(AudioSystem::OUTPUT_OPENED);

        // the first primary output opened designates the primary hw device

        if ((mPrimaryHardwareDev == NULL) && (flags & AUDIO_OUTPUT_FLAG_PRIMARY)) {

            ALOGI("Using module %d has the primary audio interface", module);

            mPrimaryHardwareDev = thread->getOutput()->audioHwDev;

            AutoMutex lock(mHardwareLock);

            mHardwareStatus = AUDIO_HW_SET_MODE;

            mPrimaryHardwareDev->hwDevice()->set_mode(mPrimaryHardwareDev->hwDevice(), mMode);

            mHardwareStatus = AUDIO_HW_IDLE;

            mPrimaryOutputSampleRate = config->sample_rate;

        }

        return NO_ERROR;

    }

    return NO_INIT;

}

// ----------------------------------------------------------------------------

sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,

                                                            audio_io_handle_t *output,

                                                            audio_config_t *config,

                                                            audio_devices_t devices,

                                                            const String8& address,

                                                            audio_output_flags_t flags)

{

    AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices);

    if (outHwDev == NULL) {

        return 0;

    }

    audio_hw_device_t *hwDevHal = outHwDev->hwDevice();

    if (*output == AUDIO_IO_HANDLE_NONE) {

        *output = nextUniqueId();

    }

    mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;

    audio_stream_out_t *outStream = NULL;

    // FOR TESTING ONLY:

    // This if statement allows overriding the audio policy settings

    // and forcing a specific format or channel mask to the HAL/Sink device for testing.

    if (!(flags & (AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD | AUDIO_OUTPUT_FLAG_DIRECT))) {

        // Check only for Normal Mixing mode

        if (kEnableExtendedPrecision) {

            // Specify format (uncomment one below to choose)

            //config->format = AUDIO_FORMAT_PCM_FLOAT;

            //config->format = AUDIO_FORMAT_PCM_24_BIT_PACKED;

            //config->format = AUDIO_FORMAT_PCM_32_BIT;

            //config->format = AUDIO_FORMAT_PCM_8_24_BIT;

            // ALOGV("openOutput_l() upgrading format to %#08x", config->format);

        }

        if (kEnableExtendedChannels) {

            // Specify channel mask (uncomment one below to choose)

            //config->channel_mask = audio_channel_out_mask_from_count(4);  // for USB 4ch

            //config->channel_mask = audio_channel_mask_from_representation_and_bits(

            //        AUDIO_CHANNEL_REPRESENTATION_INDEX, (1 << 4) - 1);  // another 4ch example

        }

    }

    status_t status = hwDevHal->open_output_stream(hwDevHal,

                                                   *output,

                                                   devices,

                                                   flags,

                                                   config,

                                                   &outStream,

                                                   address.string());

    mHardwareStatus = AUDIO_HW_IDLE;

    ALOGV("openOutput_l() openOutputStream returned output %p, sampleRate %d, Format %#x, "

            "channelMask %#x, status %d",

            outStream,

            config->sample_rate,

            config->format,

            config->channel_mask,

            status);

    if (status == NO_ERROR && outStream != NULL) {

        AudioStreamOut *outputStream = new AudioStreamOut(outHwDev, outStream, flags);

        PlaybackThread *thread;

        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {

            thread = new OffloadThread(this, outputStream, *output, devices);

            ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);

        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)

                || !isValidPcmSinkFormat(config->format)

                || !isValidPcmSinkChannelMask(config->channel_mask)) {

            thread = new DirectOutputThread(this, outputStream, *output, devices);

            ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);

        } else {

           // create a MixerThread

            thread = new MixerThread(this, outputStream, *output, devices);

            ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);

        }

        // register the thread in mPlaybackThreads, keyed by the output handle

        mPlaybackThreads.add(*output, thread);

        return thread;

    }

    return 0;

}

Summary:

Loading the vendor-supplied .so file, done by AudioFlinger:

    1. For each HwModule in mHwModules, open the .so file by its name.

    2. Construct the hardware wrapper objects.

3. Open each output in the module and create a PlaybackThread:

    done by AudioFlinger, for every output profile of every module.

4. Put the outputDesc into AudioPolicyManager.mOutputs, which represents the "already opened outputs".

    Afterwards, an integer handle (output) can be used to find the corresponding outputDesc.

Hardware wrapping:

AudioFlinger.cpp  : AudioHwDevice   ==> represents the devices supported by one module (.so file)

                                           |

audio_hw_hal.cpp  : audio_hw_device

                                           |

AudioHardware.cpp : AudioHardware
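The layering above follows a common HAL pattern: a C struct of function pointers (audio_hw_device) whose entry points forward to a vendor C++ object (AudioHardware), just as legacy_adev_open() wires `ladev->device.init_check = adev_init_check` and `ladev->hwif = createAudioHardware()`. A minimal sketch with illustrative names:

```cpp
#include <cstddef>

// Sketch of the wrapping: a C struct of function pointers (standing in for
// audio_hw_device) delegates to a vendor C++ object (standing in for
// AudioHardware). All names here are illustrative.
struct VendorHardware {            // vendor layer (AudioHardware.cpp)
    int initCheck() { return 0; }  // 0 == device is ready
};

struct CDevice {                   // HAL layer (audio_hw_device)
    VendorHardware* hwif;          // like ladev->hwif
    int (*init_check)(CDevice* dev);
};

// C entry point that forwards to the C++ object, like adev_init_check().
static int sketch_init_check(CDevice* dev) {
    return dev->hwif->initCheck();
}

// Wire up the struct, as legacy_adev_open() does for the real device.
CDevice makeDevice(VendorHardware* hw) {
    CDevice d;
    d.hwif = hw;
    d.init_check = sketch_init_check;
    return d;
}
```

Callers such as AudioFlinger only ever see the C struct, so vendor C++ classes stay hidden behind a stable C ABI.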

4.6 Overview of the AudioTrack Creation Process

a. Try the test program: frameworks/base/media/tests/audiotests/shared_mem_test.cpp

frameworks/base/media/tests/mediaframeworktest/src/com/android/mediaframeworktest/functional/audio/MediaAudioTrackTest.java

    public void testSetStereoVolumeMax() throws Exception {

        // constants for test

        final String TEST_NAME = "testSetStereoVolumeMax";

        final int TEST_SR = 22050;

        final int TEST_CONF = AudioFormat.CHANNEL_OUT_STEREO;

        final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;

        final int TEST_MODE = AudioTrack.MODE_STREAM;

        final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;

        //-------- initialization --------------

        int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);

        AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,

                minBuffSize, TEST_MODE);

        byte data[] = new byte[minBuffSize/2];

        //--------    test        --------------

        track.write(data, 0, data.length);

        track.write(data, 0, data.length);

        track.play();

        float maxVol = AudioTrack.getMaxVolume();

        assertTrue(TEST_NAME, track.setStereoVolume(maxVol, maxVol) == AudioTrack.SUCCESS);

        //-------- tear down      --------------

        track.release();

    }

frameworks\base\media\java\android\media\AudioTrack.java

    /**

     * Class constructor with {@link AudioAttributes} and {@link AudioFormat}.

     * @param attributes a non-null {@link AudioAttributes} instance.

     * @param format a non-null {@link AudioFormat} instance describing the format of the data

     *     that will be played through this AudioTrack. See {@link AudioFormat.Builder} for

     *     configuring the audio format parameters such as encoding, channel mask and sample rate.

     * @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is read

     *   from for playback. If using the AudioTrack in streaming mode, you can write data into

     *   this buffer in smaller chunks than this size. If using the AudioTrack in static mode,

     *   this is the maximum size of the sound that will be played for this instance.

     *   See {@link #getMinBufferSize(int, int, int)} to determine the minimum required buffer size

     *   for the successful creation of an AudioTrack instance in streaming mode. Using values

     *   smaller than getMinBufferSize() will result in an initialization failure.

     * @param mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}.

     * @param sessionId ID of audio session the AudioTrack must be attached to, or

     *   {@link AudioManager#AUDIO_SESSION_ID_GENERATE} if the session isn't known at construction

     *   time. See also {@link AudioManager#generateAudioSessionId()} to obtain a session ID before

     *   construction.

     * @throws IllegalArgumentException

     */

    public AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,

            int mode, int sessionId)

                    throws IllegalArgumentException {

        // mState already == STATE_UNINITIALIZED

        if (attributes == null) {

            throw new IllegalArgumentException("Illegal null AudioAttributes");

        }

        if (format == null) {

            throw new IllegalArgumentException("Illegal null AudioFormat");

        }

        // remember which looper is associated with the AudioTrack instantiation

        Looper looper;

        if ((looper = Looper.myLooper()) == null) {

            looper = Looper.getMainLooper();

        }

        int rate = 0;

        if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_SAMPLE_RATE) != 0)

        {

            rate = format.getSampleRate();

        } else {

            rate = AudioSystem.getPrimaryOutputSamplingRate();

            if (rate <= 0) {

                rate = 44100;

            }

        }

        int channelMask = AudioFormat.CHANNEL_OUT_FRONT_LEFT | AudioFormat.CHANNEL_OUT_FRONT_RIGHT;

        if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_MASK) != 0)

        {

            channelMask = format.getChannelMask();

        }

        int encoding = AudioFormat.ENCODING_DEFAULT;

        if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_ENCODING) != 0) {

            encoding = format.getEncoding();

        }

        audioParamCheck(rate, channelMask, encoding, mode);

        mStreamType = AudioSystem.STREAM_DEFAULT;

        audioBuffSizeCheck(bufferSizeInBytes);

        mInitializationLooper = looper;

        IBinder b = ServiceManager.getService(Context.APP_OPS_SERVICE);

        mAppOps = IAppOpsService.Stub.asInterface(b);

        mAttributes = (new AudioAttributes.Builder(attributes).build());

        if (sessionId < 0) {

            throw new IllegalArgumentException("Invalid audio session ID: "+sessionId);

        }

        int[] session = new int[1];

        session[0] = sessionId;

        // native initialization

        int initResult = native_setup(new WeakReference<AudioTrack>(this), mAttributes,

                mSampleRate, mChannels, mAudioFormat,

                mNativeBufferSizeInBytes, mDataLoadMode, session);

        if (initResult != SUCCESS) {

            loge("Error code "+initResult+" when initializing AudioTrack.");

            return; // with mState == STATE_UNINITIALIZED

        }

        mSessionId = session[0];

        if (mDataLoadMode == MODE_STATIC) {

            mState = STATE_NO_STATIC_DATA;

        } else {

            mState = STATE_INITIALIZED;

        }

    }

frameworks\av\media\libmedia\AudioTrack.cpp

AudioTrack::AudioTrack(

        audio_stream_type_t streamType,

        uint32_t sampleRate,

        audio_format_t format,

        audio_channel_mask_t channelMask,

        size_t frameCount,

        audio_output_flags_t flags,

        callback_t cbf,

        void* user,

        uint32_t notificationFrames,

        int sessionId,

        transfer_type transferType,

        const audio_offload_info_t *offloadInfo,

        int uid,

        pid_t pid,

        const audio_attributes_t* pAttributes)

    : mStatus(NO_INIT),

      mIsTimed(false),

      mPreviousPriority(ANDROID_PRIORITY_NORMAL),

      mPreviousSchedulingGroup(SP_DEFAULT),

      mPausedPosition(0)

{

    mStatus = set(streamType, sampleRate, format, channelMask,

            frameCount, flags, cbf, user, notificationFrames,

            0 /*sharedBuffer*/, false /*threadCanCallJava*/, sessionId, transferType,

            offloadInfo, uid, pid, pAttributes);

}

status_t AudioTrack::set(

        audio_stream_type_t streamType,

        uint32_t sampleRate,

        audio_format_t format,

        audio_channel_mask_t channelMask,

        size_t frameCount,

        audio_output_flags_t flags,

        callback_t cbf,

        void* user,

        uint32_t notificationFrames,

        const sp<IMemory>& sharedBuffer,

        bool threadCanCallJava,

        int sessionId,

        transfer_type transferType,

        const audio_offload_info_t *offloadInfo,

        int uid,

        pid_t pid,

        const audio_attributes_t* pAttributes)

{

    ALOGV("set(): streamType %d, sampleRate %u, format %#x, channelMask %#x, frameCount %zu, "

          "flags #%x, notificationFrames %u, sessionId %d, transferType %d",

          streamType, sampleRate, format, channelMask, frameCount, flags, notificationFrames,

          sessionId, transferType);

    switch (transferType) {

    case TRANSFER_DEFAULT:

        if (sharedBuffer != 0) {

            transferType = TRANSFER_SHARED;

        } else if (cbf == NULL || threadCanCallJava) {

            transferType = TRANSFER_SYNC;

        } else {

            transferType = TRANSFER_CALLBACK;

        }

        break;

    case TRANSFER_CALLBACK:

        if (cbf == NULL || sharedBuffer != 0) {

            ALOGE("Transfer type TRANSFER_CALLBACK but cbf == NULL || sharedBuffer != 0");

            return BAD_VALUE;

        }

        break;

    case TRANSFER_OBTAIN:

    case TRANSFER_SYNC:

        if (sharedBuffer != 0) {

            ALOGE("Transfer type TRANSFER_OBTAIN but sharedBuffer != 0");

            return BAD_VALUE;

        }

        break;

    case TRANSFER_SHARED:

        if (sharedBuffer == 0) {

            ALOGE("Transfer type TRANSFER_SHARED but sharedBuffer == 0");

            return BAD_VALUE;

        }

        break;

    default:

        ALOGE("Invalid transfer type %d", transferType);

        return BAD_VALUE;

    }

    mSharedBuffer = sharedBuffer;

    mTransfer = transferType;

    ALOGV_IF(sharedBuffer != 0, "sharedBuffer: %p, size: %d", sharedBuffer->pointer(),

            sharedBuffer->size());

    ALOGV("set() streamType %d frameCount %zu flags %04x", streamType, frameCount, flags);

    AutoMutex lock(mLock);

    // invariant that mAudioTrack != 0 is true only after set() returns successfully

    if (mAudioTrack != 0) {

        ALOGE("Track already in use");

        return INVALID_OPERATION;

    }

    // handle default values first.

    if (streamType == AUDIO_STREAM_DEFAULT) {

        streamType = AUDIO_STREAM_MUSIC;

    }

    if (pAttributes == NULL) {

        if (uint32_t(streamType) >= AUDIO_STREAM_CNT) {

            ALOGE("Invalid stream type %d", streamType);

            return BAD_VALUE;

        }

        setAttributesFromStreamType(streamType);

        mStreamType = streamType;

    } else {

        if (!isValidAttributes(pAttributes)) {

            ALOGE("Invalid attributes: usage=%d content=%d flags=0x%x tags=[%s]",

                pAttributes->usage, pAttributes->content_type, pAttributes->flags,

                pAttributes->tags);

        }

        // stream type shouldn't be looked at, this track has audio attributes

        memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));

        setStreamTypeFromAttributes(mAttributes);

        ALOGV("Building AudioTrack with attributes: usage=%d content=%d flags=0x%x tags=[%s]",

                mAttributes.usage, mAttributes.content_type, mAttributes.flags, mAttributes.tags);

    }

    status_t status;

    if (sampleRate == 0) {

        status = AudioSystem::getOutputSamplingRateForAttr(&sampleRate, &mAttributes);

        if (status != NO_ERROR) {

            ALOGE("Could not get output sample rate for stream type %d; status %d",

                    mStreamType, status);

            return status;

        }

    }

    mSampleRate = sampleRate;

    // these below should probably come from the audioFlinger too...

    if (format == AUDIO_FORMAT_DEFAULT) {

        format = AUDIO_FORMAT_PCM_16_BIT;

    }

    // validate parameters

    if (!audio_is_valid_format(format)) {

        ALOGE("Invalid format %#x", format);

        return BAD_VALUE;

    }

    mFormat = format;

    if (!audio_is_output_channel(channelMask)) {

        ALOGE("Invalid channel mask %#x", channelMask);

        return BAD_VALUE;

    }

    mChannelMask = channelMask;

    uint32_t channelCount = audio_channel_count_from_out_mask(channelMask);

    mChannelCount = channelCount;

    // AudioFlinger does not currently support 8-bit data in shared memory

    if (format == AUDIO_FORMAT_PCM_8_BIT && sharedBuffer != 0) {

        ALOGE("8-bit data in shared memory is not supported");

        return BAD_VALUE;

    }

    // force direct flag if format is not linear PCM

    // or offload was requested

    if ((flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD)

            || !audio_is_linear_pcm(format)) {

        ALOGV( (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD)

                    ? "Offload request, forcing to Direct Output"

                    : "Not linear PCM, forcing to Direct Output");

        flags = (audio_output_flags_t)

                // FIXME why can't we allow direct AND fast?

                ((flags | AUDIO_OUTPUT_FLAG_DIRECT) & ~AUDIO_OUTPUT_FLAG_FAST);

    }

    // only allow deep buffering for music stream type

    if (mStreamType != AUDIO_STREAM_MUSIC) {

        flags = (audio_output_flags_t)(flags &~AUDIO_OUTPUT_FLAG_DEEP_BUFFER);

    }

    if (flags & AUDIO_OUTPUT_FLAG_DIRECT) {

        if (audio_is_linear_pcm(format)) {

            mFrameSize = channelCount * audio_bytes_per_sample(format);

        } else {

            mFrameSize = sizeof(uint8_t);

        }

        mFrameSizeAF = mFrameSize;

    } else {

        ALOG_ASSERT(audio_is_linear_pcm(format));

        mFrameSize = channelCount * audio_bytes_per_sample(format);

        mFrameSizeAF = channelCount * audio_bytes_per_sample(

                format == AUDIO_FORMAT_PCM_8_BIT ? AUDIO_FORMAT_PCM_16_BIT : format);

        // createTrack will return an error if PCM format is not supported by server,

        // so no need to check for specific PCM formats here

    }

    // Make copy of input parameter offloadInfo so that in the future:

    //  (a) createTrack_l doesn't need it as an input parameter

    //  (b) we can support re-creation of offloaded tracks

    if (offloadInfo != NULL) {

        mOffloadInfoCopy = *offloadInfo;

        mOffloadInfo = &mOffloadInfoCopy;

    } else {

        mOffloadInfo = NULL;

    }

    mVolume[AUDIO_INTERLEAVE_LEFT] = 1.0f;

    mVolume[AUDIO_INTERLEAVE_RIGHT] = 1.0f;

    mSendLevel = 0.0f;

    // mFrameCount is initialized in createTrack_l

    mReqFrameCount = frameCount;

    mNotificationFramesReq = notificationFrames;

    mNotificationFramesAct = 0;

    mSessionId = sessionId;

    int callingpid = IPCThreadState::self()->getCallingPid();

    int mypid = getpid();

    if (uid == -1 || (callingpid != mypid)) {

        mClientUid = IPCThreadState::self()->getCallingUid();

    } else {

        mClientUid = uid;

    }

    if (pid == -1 || (callingpid != mypid)) {

        mClientPid = callingpid;

    } else {

        mClientPid = pid;

    }

    mAuxEffectId = 0;

    mFlags = flags;

    mCbf = cbf;

    if (cbf != NULL) {

        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);

        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);

    }

    // create the IAudioTrack

    status = createTrack_l();

    if (status != NO_ERROR) {

        if (mAudioTrackThread != 0) {

            mAudioTrackThread->requestExit();   // see comment in AudioTrack.h

            mAudioTrackThread->requestExitAndWait();

            mAudioTrackThread.clear();

        }

        return status;

    }

    mStatus = NO_ERROR;

    mState = STATE_STOPPED;

    mUserData = user;

    mLoopPeriod = 0;

    mMarkerPosition = 0;

    mMarkerReached = false;

    mNewPosition = 0;

    mUpdatePeriod = 0;

    mServer = 0;

    mPosition = 0;

    mReleased = 0;

    mStartUs = 0;

    AudioSystem::acquireAudioSessionId(mSessionId, mClientPid);

    mSequence = 1;

    mObservedSequence = mSequence;

    mInUnderrun = false;

    return NO_ERROR;

}

An AudioTrack object is created every time sound is played;

creating the Java AudioTrack object in turn creates a C++ AudioTrack object,

so the core of the analysis is the C++ AudioTrack class.

Creating an AudioTrack involves one important function: set
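Inside set(), the TRANSFER_DEFAULT case resolves the actual transfer mode from the arguments. The decision can be restated as a small standalone function (a sketch of the logic, not the AOSP code):

```cpp
#include <cassert>

enum Transfer { TRANSFER_SHARED, TRANSFER_SYNC, TRANSFER_CALLBACK };

// Mirrors the TRANSFER_DEFAULT case in AudioTrack::set():
// a shared buffer wins, then synchronous write() if there is no usable
// callback, else the callback pull model.
Transfer resolveDefault(bool hasSharedBuffer, bool hasCallback,
                        bool threadCanCallJava) {
    if (hasSharedBuffer) return TRANSFER_SHARED;                  // MODE_STATIC style
    if (!hasCallback || threadCanCallJava) return TRANSFER_SYNC;  // app calls write()
    return TRANSFER_CALLBACK;                                     // server pulls via cbf
}
```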

b. A guess at the main work of the creation process

b.1 Use the AudioTrack's attributes to find the corresponding output and playbackThread via AudioPolicy

b.2 Create the corresponding track in that playbackThread

b.3 Set up shared memory between the app's AudioTrack and the matching track in the playbackThread's mTracks

c. Source-code sequence diagram

4.7 AudioPolicyManager errata and review

frameworks\av\services\audiopolicy\AudioPolicyService.cpp

void AudioPolicyService::onFirstRef()

{

    char value[PROPERTY_VALUE_MAX];

    const struct hw_module_t *module;

    int forced_val;

    int rc;

    {

        Mutex::Autolock _l(mLock);

        // start tone playback thread

        mTonePlaybackThread = new AudioCommandThread(String8("ApmTone"), this);

        // start audio commands thread

        mAudioCommandThread = new AudioCommandThread(String8("ApmAudio"), this);

        // start output activity command thread

        mOutputCommandThread = new AudioCommandThread(String8("ApmOutput"), this);

#ifdef USE_LEGACY_AUDIO_POLICY  // use the legacy policy

        ALOGI("AudioPolicyService CSTOR in legacy mode");

        /* instantiate the audio policy manager */

        rc = hw_get_module(AUDIO_POLICY_HARDWARE_MODULE_ID, &module);

        if (rc) {

            return;

        }

        rc = audio_policy_dev_open(module, &mpAudioPolicyDev);

        ALOGE_IF(rc, "couldn't open audio policy device (%s)", strerror(-rc));

        if (rc) {

            return;

        }

        rc = mpAudioPolicyDev->create_audio_policy(mpAudioPolicyDev, &aps_ops, this,

                                                   &mpAudioPolicy);

        ALOGE_IF(rc, "couldn't create audio policy (%s)", strerror(-rc));

        if (rc) {

            return;

        }

        rc = mpAudioPolicy->init_check(mpAudioPolicy);

        ALOGE_IF(rc, "couldn't init_check the audio policy (%s)", strerror(-rc));

        if (rc) {

            return;

        }

        ALOGI("Loaded audio policy from %s (%s)", module->name, module->id);

#else

        ALOGI("AudioPolicyService CSTOR in new mode");

        mAudioPolicyClient = new AudioPolicyClient(this);

        mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);

#endif

    }

    // load audio processing modules

    sp<AudioPolicyEffects> audioPolicyEffects = new AudioPolicyEffects();

    {

        Mutex::Autolock _l(mLock);

        mAudioPolicyEffects = audioPolicyEffects;

    }

}

AudioPolicyFactory.cpp (z:\android-5.0.2\frameworks\av\services\audiopolicy)    

extern "C" AudioPolicyInterface* createAudioPolicyManager(

        AudioPolicyClientInterface *clientInterface)

{

    return new AudioPolicyManager(clientInterface);

}

AudioPolicyManager.cpp (z:\android-5.0.2\frameworks\av\services\audiopolicy)   

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)

    :

#ifdef AUDIO_POLICY_TEST

    Thread(false),

#endif //AUDIO_POLICY_TEST

    mPrimaryOutput((audio_io_handle_t)0),

    mPhoneState(AUDIO_MODE_NORMAL),

    mLimitRingtoneVolume(false), mLastVoiceVolume(-1.0f),

    mTotalEffectsCpuLoad(0), mTotalEffectsMemory(0),

    mA2dpSuspended(false),

    mSpeakerDrcEnabled(false), mNextUniqueId(1),

    mAudioPortGeneration(1)

{

    mUidCached = getuid();

    mpClientInterface = clientInterface;

    for (int i = 0; i < AUDIO_POLICY_FORCE_USE_CNT; i++) {

        mForceUse[i] = AUDIO_POLICY_FORCE_NONE;

    }

    mDefaultOutputDevice = new DeviceDescriptor(String8(""), AUDIO_DEVICE_OUT_SPEAKER);

    if (loadAudioPolicyConfig(AUDIO_POLICY_VENDOR_CONFIG_FILE) != NO_ERROR) {

        if (loadAudioPolicyConfig(AUDIO_POLICY_CONFIG_FILE) != NO_ERROR) {

            ALOGE("could not load audio policy configuration file, setting defaults");

            defaultAudioPolicyConfig();

        }

    }

    // mAvailableOutputDevices and mAvailableInputDevices now contain all attached devices

    // must be done after reading the policy

    initializeVolumeCurves();

    // open all output streams needed to access attached devices

    audio_devices_t outputDeviceTypes = mAvailableOutputDevices.types();

    audio_devices_t inputDeviceTypes = mAvailableInputDevices.types() & ~AUDIO_DEVICE_BIT_IN;

    for (size_t i = 0; i < mHwModules.size(); i++) {

        mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->mName);

        if (mHwModules[i]->mHandle == 0) {

            ALOGW("could not open HW module %s", mHwModules[i]->mName);

            continue;

        }

        // open all output streams needed to access attached devices

        // except for direct output streams that are only opened when they are actually

        // required by an app.

        // This also validates mAvailableOutputDevices list

        for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)

        {

            const sp<IOProfile> outProfile = mHwModules[i]->mOutputProfiles[j];

            if (outProfile->mSupportedDevices.isEmpty()) {

                ALOGW("Output profile contains no device on module %s", mHwModules[i]->mName);

                continue;

            }

            if ((outProfile->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {

                continue;

            }

            audio_devices_t profileType = outProfile->mSupportedDevices.types();

            if ((profileType & mDefaultOutputDevice->mDeviceType) != AUDIO_DEVICE_NONE) {

                profileType = mDefaultOutputDevice->mDeviceType;

            } else {

                // chose first device present in mSupportedDevices also part of

                // outputDeviceTypes

                for (size_t k = 0; k  < outProfile->mSupportedDevices.size(); k++) {

                    profileType = outProfile->mSupportedDevices[k]->mDeviceType;

                    if ((profileType & outputDeviceTypes) != 0) {

                        break;

                    }

                }

            }

            if ((profileType & outputDeviceTypes) == 0) {

                continue;

            }

            sp<AudioOutputDescriptor> outputDesc = new AudioOutputDescriptor(outProfile);

            outputDesc->mDevice = profileType;

            audio_config_t config = AUDIO_CONFIG_INITIALIZER;

            config.sample_rate = outputDesc->mSamplingRate;

            config.channel_mask = outputDesc->mChannelMask;

            config.format = outputDesc->mFormat;

            audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;

            status_t status = mpClientInterface->openOutput(outProfile->mModule->mHandle,

                                                            &output,

                                                            &config,

                                                            &outputDesc->mDevice,

                                                            String8(""),

                                                            &outputDesc->mLatency,

                                                            outputDesc->mFlags);

            if (status != NO_ERROR) {

                ALOGW("Cannot open output stream for device %08x on hw module %s",

                      outputDesc->mDevice,

                      mHwModules[i]->mName);

            } else {

                outputDesc->mSamplingRate = config.sample_rate;

                outputDesc->mChannelMask = config.channel_mask;

                outputDesc->mFormat = config.format;

                for (size_t k = 0; k  < outProfile->mSupportedDevices.size(); k++) {

                    audio_devices_t type = outProfile->mSupportedDevices[k]->mDeviceType;

                    ssize_t index =

                            mAvailableOutputDevices.indexOf(outProfile->mSupportedDevices[k]);

                    // give a valid ID to an attached device once confirmed it is reachable

                    if ((index >= 0) && (mAvailableOutputDevices[index]->mId == 0)) {

                        mAvailableOutputDevices[index]->mId = nextUniqueId();

                        mAvailableOutputDevices[index]->mModule = mHwModules[i];

                    }

                }

                if (mPrimaryOutput == 0 &&

                        outProfile->mFlags & AUDIO_OUTPUT_FLAG_PRIMARY) {

                    mPrimaryOutput = output;

                }

                addOutput(output, outputDesc);

                setOutputDevice(output,

                                outputDesc->mDevice,

                                true);

            }

        }

        // open input streams needed to access attached devices to validate

        // mAvailableInputDevices list

        for (size_t j = 0; j < mHwModules[i]->mInputProfiles.size(); j++)

        {

            const sp<IOProfile> inProfile = mHwModules[i]->mInputProfiles[j];

            if (inProfile->mSupportedDevices.isEmpty()) {

                ALOGW("Input profile contains no device on module %s", mHwModules[i]->mName);

                continue;

            }

            // chose first device present in mSupportedDevices also part of

            // inputDeviceTypes

            audio_devices_t profileType = AUDIO_DEVICE_NONE;

            for (size_t k = 0; k  < inProfile->mSupportedDevices.size(); k++) {

                profileType = inProfile->mSupportedDevices[k]->mDeviceType;

                if (profileType & inputDeviceTypes) {

                    break;

                }

            }

            if ((profileType & inputDeviceTypes) == 0) {

                continue;

            }

            sp<AudioInputDescriptor> inputDesc = new AudioInputDescriptor(inProfile);

            inputDesc->mInputSource = AUDIO_SOURCE_MIC;

            inputDesc->mDevice = profileType;

            audio_config_t config = AUDIO_CONFIG_INITIALIZER;

            config.sample_rate = inputDesc->mSamplingRate;

            config.channel_mask = inputDesc->mChannelMask;

            config.format = inputDesc->mFormat;

            audio_io_handle_t input = AUDIO_IO_HANDLE_NONE;

            status_t status = mpClientInterface->openInput(inProfile->mModule->mHandle,

                                                           &input,

                                                           &config,

                                                           &inputDesc->mDevice,

                                                           String8(""),

                                                           AUDIO_SOURCE_MIC,

                                                           AUDIO_INPUT_FLAG_NONE);

            if (status == NO_ERROR) {

                for (size_t k = 0; k  < inProfile->mSupportedDevices.size(); k++) {

                    audio_devices_t type = inProfile->mSupportedDevices[k]->mDeviceType;

                    ssize_t index =

                            mAvailableInputDevices.indexOf(inProfile->mSupportedDevices[k]);

                    // give a valid ID to an attached device once confirmed it is reachable

                    if ((index >= 0) && (mAvailableInputDevices[index]->mId == 0)) {

                        mAvailableInputDevices[index]->mId = nextUniqueId();

                        mAvailableInputDevices[index]->mModule = mHwModules[i];

                    }

                }

                mpClientInterface->closeInput(input);

            } else {

                ALOGW("Cannot open input stream for device %08x on hw module %s",

                      inputDesc->mDevice,

                      mHwModules[i]->mName);

            }

        }

    }

    // make sure all attached devices have been allocated a unique ID

    for (size_t i = 0; i  < mAvailableOutputDevices.size();) {

        if (mAvailableOutputDevices[i]->mId == 0) {

            ALOGW("Input device %08x unreachable", mAvailableOutputDevices[i]->mDeviceType);

            mAvailableOutputDevices.remove(mAvailableOutputDevices[i]);

            continue;

        }

        i++;

    }

    for (size_t i = 0; i  < mAvailableInputDevices.size();) {

        if (mAvailableInputDevices[i]->mId == 0) {

            ALOGW("Input device %08x unreachable", mAvailableInputDevices[i]->mDeviceType);

            mAvailableInputDevices.remove(mAvailableInputDevices[i]);

            continue;

        }

        i++;

    }

    // make sure default device is reachable

    if (mAvailableOutputDevices.indexOf(mDefaultOutputDevice) < 0) {

        ALOGE("Default device %08x is unreachable", mDefaultOutputDevice->mDeviceType);

    }

    ALOGE_IF((mPrimaryOutput == 0), "Failed to open primary output");

    updateDevicesAndOutputs();

#ifdef AUDIO_POLICY_TEST

    if (mPrimaryOutput != 0) {

        AudioParameter outputCmd = AudioParameter();

        outputCmd.addInt(String8("set_id"), 0);

        mpClientInterface->setParameters(mPrimaryOutput, outputCmd.toString());

        mTestDevice = AUDIO_DEVICE_OUT_SPEAKER;

        mTestSamplingRate = 44100;

        mTestFormat = AUDIO_FORMAT_PCM_16_BIT;

        mTestChannels =  AUDIO_CHANNEL_OUT_STEREO;

        mTestLatencyMs = 0;

        mCurOutput = 0;

        mDirectOutput = false;

        for (int i = 0; i < NUM_TEST_OUTPUTS; i++) {

            mTestOutputs[i] = 0;

        }

        const size_t SIZE = 256;

        char buffer[SIZE];

        snprintf(buffer, SIZE, "AudioPolicyManagerTest");

        run(buffer, ANDROID_PRIORITY_AUDIO);

    }

#endif //AUDIO_POLICY_TEST

}

void AudioPolicyManager::addOutput(audio_io_handle_t output, sp<AudioOutputDescriptor> outputDesc)

{

    outputDesc->mIoHandle = output;

    outputDesc->mId = nextUniqueId();

    mOutputs.add(output, outputDesc);

    nextAudioPortGeneration();

}

4.8 AudioTrack creation: selecting the output

a. The app specifies a stream type when constructing the AudioTrack

b. AudioTrack::setAttributesFromStreamType

c. AudioPolicyManager::getStrategyForAttr

d. AudioPolicyManager::getDeviceForStrategy

e. AudioPolicyManager::getOutputForDevice

       e.1 AudioPolicyManager::getOutputsForDevice

       e.2 output = selectOutput(outputs, flags, format);
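The chain a through e can be compressed into two lookups: stream type to strategy, then strategy to device. A sketch with illustrative values, mirroring the shape of getStrategyForAttr and getDeviceForStrategy rather than reproducing them:

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative stand-ins for the AOSP enums.
enum Strategy { STRATEGY_MEDIA, STRATEGY_PHONE, STRATEGY_SONIFICATION };
enum Device   { DEVICE_SPEAKER, DEVICE_EARPIECE };

// stream type -> strategy (cf. getStrategyForAttr)
Strategy strategyFor(const std::string& streamType) {
    static const std::map<std::string, Strategy> table = {
        {"MUSIC", STRATEGY_MEDIA},
        {"VOICE_CALL", STRATEGY_PHONE},
        {"RING", STRATEGY_SONIFICATION},
    };
    auto it = table.find(streamType);
    return it == table.end() ? STRATEGY_MEDIA : it->second;  // default: media
}

// strategy -> device (cf. getDeviceForStrategy, ignoring wired/BT state)
Device deviceFor(Strategy s) {
    return s == STRATEGY_PHONE ? DEVICE_EARPIECE : DEVICE_SPEAKER;
}
```

The device then selects an output via getOutputsForDevice/selectOutput, and the output handle finally identifies the playbackThread that will receive the data.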

AudioTrack.cpp (z:\android-5.0.2\frameworks\av\media\libmedia)   

void AudioTrack::setAttributesFromStreamType(audio_stream_type_t streamType) {

    mAttributes.flags = 0x0;

    switch(streamType) {

    case AUDIO_STREAM_DEFAULT:

    case AUDIO_STREAM_MUSIC:

        mAttributes.content_type = AUDIO_CONTENT_TYPE_MUSIC;

        mAttributes.usage = AUDIO_USAGE_MEDIA;

        break;

    case AUDIO_STREAM_VOICE_CALL:

        mAttributes.content_type = AUDIO_CONTENT_TYPE_SPEECH;

        mAttributes.usage = AUDIO_USAGE_VOICE_COMMUNICATION;

        break;

    case AUDIO_STREAM_ENFORCED_AUDIBLE:

        mAttributes.flags  |= AUDIO_FLAG_AUDIBILITY_ENFORCED;

        // intended fall through, attributes in common with STREAM_SYSTEM

    case AUDIO_STREAM_SYSTEM:

        mAttributes.content_type = AUDIO_CONTENT_TYPE_SONIFICATION;

        mAttributes.usage = AUDIO_USAGE_ASSISTANCE_SONIFICATION;

        break;

    case AUDIO_STREAM_RING:

        mAttributes.content_type = AUDIO_CONTENT_TYPE_SONIFICATION;

        mAttributes.usage = AUDIO_USAGE_NOTIFICATION_TELEPHONY_RINGTONE;

        break;

    case AUDIO_STREAM_ALARM:

        mAttributes.content_type = AUDIO_CONTENT_TYPE_SONIFICATION;

        mAttributes.usage = AUDIO_USAGE_ALARM;

        break;

    case AUDIO_STREAM_NOTIFICATION:

        mAttributes.content_type = AUDIO_CONTENT_TYPE_SONIFICATION;

        mAttributes.usage = AUDIO_USAGE_NOTIFICATION;

        break;

    case AUDIO_STREAM_BLUETOOTH_SCO:

        mAttributes.content_type = AUDIO_CONTENT_TYPE_SPEECH;

        mAttributes.usage = AUDIO_USAGE_VOICE_COMMUNICATION;

        mAttributes.flags |= AUDIO_FLAG_SCO;

        break;

    case AUDIO_STREAM_DTMF:

        mAttributes.content_type = AUDIO_CONTENT_TYPE_SONIFICATION;

        mAttributes.usage = AUDIO_USAGE_VOICE_COMMUNICATION_SIGNALLING;

        break;

    case AUDIO_STREAM_TTS:

        mAttributes.content_type = AUDIO_CONTENT_TYPE_SPEECH;

        mAttributes.usage = AUDIO_USAGE_ASSISTANCE_ACCESSIBILITY;

        break;

    default:

        ALOGE("invalid stream type %d when converting to attributes", streamType);

    }

}

AudioPolicyManager.cpp (z:\android-5.0.2\frameworks\av\services\audiopolicy)   

uint32_t AudioPolicyManager::getStrategyForAttr(const audio_attributes_t *attr) {

    // flags to strategy mapping

    if ((attr->flags & AUDIO_FLAG_AUDIBILITY_ENFORCED) == AUDIO_FLAG_AUDIBILITY_ENFORCED) {

        return (uint32_t) STRATEGY_ENFORCED_AUDIBLE;

    }

    // usage to strategy mapping

    switch (attr->usage) {

    case AUDIO_USAGE_MEDIA:

    case AUDIO_USAGE_GAME:

    case AUDIO_USAGE_ASSISTANCE_ACCESSIBILITY:

    case AUDIO_USAGE_ASSISTANCE_NAVIGATION_GUIDANCE:

    case AUDIO_USAGE_ASSISTANCE_SONIFICATION:

        return (uint32_t) STRATEGY_MEDIA;

    case AUDIO_USAGE_VOICE_COMMUNICATION:

        return (uint32_t) STRATEGY_PHONE;

    case AUDIO_USAGE_VOICE_COMMUNICATION_SIGNALLING:

        return (uint32_t) STRATEGY_DTMF;

    case AUDIO_USAGE_ALARM:

    case AUDIO_USAGE_NOTIFICATION_TELEPHONY_RINGTONE:

        return (uint32_t) STRATEGY_SONIFICATION;

    case AUDIO_USAGE_NOTIFICATION:

    case AUDIO_USAGE_NOTIFICATION_COMMUNICATION_REQUEST:

    case AUDIO_USAGE_NOTIFICATION_COMMUNICATION_INSTANT:

    case AUDIO_USAGE_NOTIFICATION_COMMUNICATION_DELAYED:

    case AUDIO_USAGE_NOTIFICATION_EVENT:

        return (uint32_t) STRATEGY_SONIFICATION_RESPECTFUL;

    case AUDIO_USAGE_UNKNOWN:

    default:

        return (uint32_t) STRATEGY_MEDIA;

    }

}


audio_devices_t AudioPolicyManager::getDeviceForStrategy(routing_strategy strategy,

                                                             bool fromCache)

{

    uint32_t device = AUDIO_DEVICE_NONE;

    if (fromCache) {

        ALOGVV("getDeviceForStrategy() from cache strategy %d, device %x",

              strategy, mDeviceForStrategy[strategy]);

        return mDeviceForStrategy[strategy];

    }

    audio_devices_t availableOutputDeviceTypes = mAvailableOutputDevices.types();

    switch (strategy) {

    case STRATEGY_SONIFICATION_RESPECTFUL:

        if (isInCall()) {

            device = getDeviceForStrategy(STRATEGY_SONIFICATION, false /*fromCache*/);

        } else if (isStreamActiveRemotely(AUDIO_STREAM_MUSIC,

                SONIFICATION_RESPECTFUL_AFTER_MUSIC_DELAY)) {

            // while media is playing on a remote device, use the sonification behavior.

            // Note that we test this usecase before testing if media is playing because

            //   the isStreamActive() method only informs about the activity of a stream, not

            //   if it's for local playback. Note also that we use the same delay between both tests

            device = getDeviceForStrategy(STRATEGY_SONIFICATION, false /*fromCache*/);

            // use "safe" speaker if available instead of normal speaker to avoid triggering

            //other acoustic safety mechanisms for notification

            if (device == AUDIO_DEVICE_OUT_SPEAKER && (availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPEAKER_SAFE))

                device = AUDIO_DEVICE_OUT_SPEAKER_SAFE;

        } else if (isStreamActive(AUDIO_STREAM_MUSIC, SONIFICATION_RESPECTFUL_AFTER_MUSIC_DELAY)) {

            // while media is playing (or has recently played), use the same device

            device = getDeviceForStrategy(STRATEGY_MEDIA, false /*fromCache*/);

        } else {

            // when media is not playing anymore, fall back on the sonification behavior

            device = getDeviceForStrategy(STRATEGY_SONIFICATION, false /*fromCache*/);

            // use "safe" speaker if available instead of normal speaker to avoid triggering

            //other acoustic safety mechanisms for notification

            if (device == AUDIO_DEVICE_OUT_SPEAKER && (availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPEAKER_SAFE))

                device = AUDIO_DEVICE_OUT_SPEAKER_SAFE;

        }

        break;

    case STRATEGY_DTMF:

        if (!isInCall()) {

            // when off call, DTMF strategy follows the same rules as MEDIA strategy

            device = getDeviceForStrategy(STRATEGY_MEDIA, false /*fromCache*/);

            break;

        }

        // when in call, DTMF and PHONE strategies follow the same rules

        // FALL THROUGH

    case STRATEGY_PHONE:

        // Force use of only devices on primary output if:

        // - in call AND

        //   - cannot route from voice call RX OR

        //   - audio HAL version is < 3.0 and TX device is on the primary HW module

        if (mPhoneState == AUDIO_MODE_IN_CALL) {

            audio_devices_t txDevice = getDeviceForInputSource(AUDIO_SOURCE_VOICE_COMMUNICATION);

            sp<AudioOutputDescriptor> hwOutputDesc = mOutputs.valueFor(mPrimaryOutput);

            if (((mAvailableInputDevices.types() &

                    AUDIO_DEVICE_IN_TELEPHONY_RX & ~AUDIO_DEVICE_BIT_IN) == 0) ||

                    (((txDevice & availablePrimaryInputDevices() & ~AUDIO_DEVICE_BIT_IN) != 0) &&

                         (hwOutputDesc->getAudioPort()->mModule->mHalVersion <

                             AUDIO_DEVICE_API_VERSION_3_0))) {

                availableOutputDeviceTypes = availablePrimaryOutputDevices();

            }

        }

        // for phone strategy, we first consider the forced use and then the available devices by order

        // of priority

        switch (mForceUse[AUDIO_POLICY_FORCE_FOR_COMMUNICATION]) {

        case AUDIO_POLICY_FORCE_BT_SCO:

            if (!isInCall() || strategy != STRATEGY_DTMF) {

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_SCO_CARKIT;

                if (device) break;

            }

            device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_SCO_HEADSET;

            if (device) break;

            device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_SCO;

            if (device) break;

            // if SCO device is requested but no SCO device is available, fall back to default case

            // FALL THROUGH

        default:    // FORCE_NONE

            // when not in a phone call, phone strategy should route STREAM_VOICE_CALL to A2DP

            if (!isInCall() &&

                    (mForceUse[AUDIO_POLICY_FORCE_FOR_MEDIA] != AUDIO_POLICY_FORCE_NO_BT_A2DP) &&

                    (getA2dpOutput() != 0) && !mA2dpSuspended) {

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP;

                if (device) break;

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES;

                if (device) break;

            }

            device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_WIRED_HEADPHONE;

            if (device) break;

            device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_WIRED_HEADSET;

            if (device) break;

            device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_DEVICE;

            if (device) break;

            if (mPhoneState != AUDIO_MODE_IN_CALL) {

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_ACCESSORY;

                if (device) break;

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_DGTL_DOCK_HEADSET;

                if (device) break;

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_AUX_DIGITAL;

                if (device) break;

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_ANLG_DOCK_HEADSET;

                if (device) break;

            }

            device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_EARPIECE;

            if (device) break;

            device = mDefaultOutputDevice->mDeviceType;

            if (device == AUDIO_DEVICE_NONE) {

                ALOGE("getDeviceForStrategy() no device found for STRATEGY_PHONE");

            }

            break;

        case AUDIO_POLICY_FORCE_SPEAKER:

            // when not in a phone call, phone strategy should route STREAM_VOICE_CALL to

            // A2DP speaker when forcing to speaker output

            if (!isInCall() &&

                    (mForceUse[AUDIO_POLICY_FORCE_FOR_MEDIA] != AUDIO_POLICY_FORCE_NO_BT_A2DP) &&

                    (getA2dpOutput() != 0) && !mA2dpSuspended) {

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER;

                if (device) break;

            }

            if (mPhoneState != AUDIO_MODE_IN_CALL) {

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_ACCESSORY;

                if (device) break;

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_DEVICE;

                if (device) break;

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_DGTL_DOCK_HEADSET;

                if (device) break;

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_AUX_DIGITAL;

                if (device) break;

                device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_ANLG_DOCK_HEADSET;

                if (device) break;

            }

            device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_LINE;

            if (device) break;

            device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPEAKER;

            if (device) break;

            device = mDefaultOutputDevice->mDeviceType;

            if (device == AUDIO_DEVICE_NONE) {

                ALOGE("getDeviceForStrategy() no device found for STRATEGY_PHONE, FORCE_SPEAKER");

            }

            break;

        }

    break;

    case STRATEGY_SONIFICATION:

        // If incall, just select the STRATEGY_PHONE device: The rest of the behavior is handled by

        // handleIncallSonification().

        if (isInCall()) {

            device = getDeviceForStrategy(STRATEGY_PHONE, false /*fromCache*/);

            break;

        }

        // FALL THROUGH

    case STRATEGY_ENFORCED_AUDIBLE:

        // strategy STRATEGY_ENFORCED_AUDIBLE uses same routing policy as STRATEGY_SONIFICATION

        // except:

        //   - when in call where it doesn't default to STRATEGY_PHONE behavior

        //   - in countries where not enforced in which case it follows STRATEGY_MEDIA

        if ((strategy == STRATEGY_SONIFICATION) ||

                (mForceUse[AUDIO_POLICY_FORCE_FOR_SYSTEM] == AUDIO_POLICY_FORCE_SYSTEM_ENFORCED)) {

            device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPEAKER;

            if (device == AUDIO_DEVICE_NONE) {

                ALOGE("getDeviceForStrategy() speaker device not found for STRATEGY_SONIFICATION");

            }

        }

        // The second device used for sonification is the same as the device used by media strategy

        // FALL THROUGH

    case STRATEGY_MEDIA: {

        uint32_t device2 = AUDIO_DEVICE_NONE;

        if (strategy != STRATEGY_SONIFICATION) {

            // no sonification on remote submix (e.g. WFD)

            device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_REMOTE_SUBMIX;

        }

        if ((device2 == AUDIO_DEVICE_NONE) &&

                (mForceUse[AUDIO_POLICY_FORCE_FOR_MEDIA] != AUDIO_POLICY_FORCE_NO_BT_A2DP) &&

                (getA2dpOutput() != 0) && !mA2dpSuspended) {

            device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP;

            if (device2 == AUDIO_DEVICE_NONE) {

                device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES;

            }

            if (device2 == AUDIO_DEVICE_NONE) {

                device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER;

            }

        }

        if (device2 == AUDIO_DEVICE_NONE) {

            device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_WIRED_HEADPHONE;

        }

        if ((device2 == AUDIO_DEVICE_NONE)) {

            device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_LINE;

        }

        if (device2 == AUDIO_DEVICE_NONE) {

            device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_WIRED_HEADSET;

        }

        if (device2 == AUDIO_DEVICE_NONE) {

            device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_ACCESSORY;

        }

        if (device2 == AUDIO_DEVICE_NONE) {

            device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_DEVICE;

        }

        if (device2 == AUDIO_DEVICE_NONE) {

            device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_DGTL_DOCK_HEADSET;

        }

        if ((device2 == AUDIO_DEVICE_NONE) && (strategy != STRATEGY_SONIFICATION)) {

            // no sonification on aux digital (e.g. HDMI)

            device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_AUX_DIGITAL;

        }

        if ((device2 == AUDIO_DEVICE_NONE) &&

                (mForceUse[AUDIO_POLICY_FORCE_FOR_DOCK] == AUDIO_POLICY_FORCE_ANALOG_DOCK)) {

            device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_ANLG_DOCK_HEADSET;

        }

        if (device2 == AUDIO_DEVICE_NONE) {

            device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPEAKER;

        }

        int device3 = AUDIO_DEVICE_NONE;

        if (strategy == STRATEGY_MEDIA) {

            // ARC, SPDIF and AUX_LINE can co-exist with others.

            device3 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_HDMI_ARC;

            device3 |= (availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPDIF);

            device3 |= (availableOutputDeviceTypes & AUDIO_DEVICE_OUT_AUX_LINE);

        }

        device2 |= device3;

        // device is DEVICE_OUT_SPEAKER if we come from case STRATEGY_SONIFICATION or

        // STRATEGY_ENFORCED_AUDIBLE, AUDIO_DEVICE_NONE otherwise

        device |= device2;

        // If hdmi system audio mode is on, remove speaker out of output list.

        if ((strategy == STRATEGY_MEDIA) &&

            (mForceUse[AUDIO_POLICY_FORCE_FOR_HDMI_SYSTEM_AUDIO] ==

                AUDIO_POLICY_FORCE_HDMI_SYSTEM_AUDIO_ENFORCED)) {

            device &= ~AUDIO_DEVICE_OUT_SPEAKER;

        }

        if (device) break;

        device = mDefaultOutputDevice->mDeviceType;

        if (device == AUDIO_DEVICE_NONE) {

            ALOGE("getDeviceForStrategy() no device found for STRATEGY_MEDIA");

        }

        } break;

    default:

        ALOGW("getDeviceForStrategy() unknown strategy: %d", strategy);

        break;

    }

    ALOGVV("getDeviceForStrategy() strategy %d, device %x", strategy, device);

    return device;

}

AudioPolicyManager.cpp (z:\android-5.0.2\frameworks\av\services\audiopolicy) 

audio_io_handle_t AudioPolicyManager::getOutputForDevice(

        audio_devices_t device,

        audio_stream_type_t stream,

        uint32_t samplingRate,

        audio_format_t format,

        audio_channel_mask_t channelMask,

        audio_output_flags_t flags,

        const audio_offload_info_t *offloadInfo)

{

    audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;

    uint32_t latency = 0;

    status_t status;

#ifdef AUDIO_POLICY_TEST

    if (mCurOutput != 0) {

        ALOGV("getOutput() test output mCurOutput %d, samplingRate %d, format %d, channelMask %x, mDirectOutput %d",

                mCurOutput, mTestSamplingRate, mTestFormat, mTestChannels, mDirectOutput);

        if (mTestOutputs[mCurOutput] == 0) {

            ALOGV("getOutput() opening test output");

            sp<AudioOutputDescriptor> outputDesc = new AudioOutputDescriptor(NULL);

            outputDesc->mDevice = mTestDevice;

            outputDesc->mLatency = mTestLatencyMs;

            outputDesc->mFlags =

                    (audio_output_flags_t)(mDirectOutput ? AUDIO_OUTPUT_FLAG_DIRECT : 0);

            outputDesc->mRefCount[stream] = 0;

            audio_config_t config = AUDIO_CONFIG_INITIALIZER;

            config.sample_rate = mTestSamplingRate;

            config.channel_mask = mTestChannels;

            config.format = mTestFormat;

            if (offloadInfo != NULL) {

                config.offload_info = *offloadInfo;

            }

            status = mpClientInterface->openOutput(0,

                                                  &mTestOutputs[mCurOutput],

                                                  &config,

                                                  &outputDesc->mDevice,

                                                  String8(""),

                                                  &outputDesc->mLatency,

                                                  outputDesc->mFlags);

            if (status == NO_ERROR) {

                outputDesc->mSamplingRate = config.sample_rate;

                outputDesc->mFormat = config.format;

                outputDesc->mChannelMask = config.channel_mask;

                AudioParameter outputCmd = AudioParameter();

                outputCmd.addInt(String8("set_id"),mCurOutput);

                mpClientInterface->setParameters(mTestOutputs[mCurOutput],outputCmd.toString());

                addOutput(mTestOutputs[mCurOutput], outputDesc);

            }

        }

        return mTestOutputs[mCurOutput];

    }

#endif //AUDIO_POLICY_TEST

    // open a direct output if required by specified parameters

    //force direct flag if offload flag is set: offloading implies a direct output stream

    // and all common behaviors are driven by checking only the direct flag

    // this should normally be set appropriately in the policy configuration file

    if ((flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) != 0) {

        flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_DIRECT);

    }

    if ((flags & AUDIO_OUTPUT_FLAG_HW_AV_SYNC) != 0) {

        flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_DIRECT);

    }

    sp<IOProfile> profile;

    // skip direct output selection if the request can obviously be attached to a mixed output

    // and not explicitly requested

    if (((flags & AUDIO_OUTPUT_FLAG_DIRECT) == 0) &&

            audio_is_linear_pcm(format) && samplingRate <= MAX_MIXER_SAMPLING_RATE &&

            audio_channel_count_from_out_mask(channelMask) <= 2) {

        goto non_direct_output;

    }

    // Do not allow offloading if one non offloadable effect is enabled. This prevents from

    // creating an offloaded track and tearing it down immediately after start when audioflinger

    // detects there is an active non offloadable effect.

    // FIXME: We should check the audio session here but we do not have it in this context.

    // This may prevent offloading in rare situations where effects are left active by apps

    // in the background.

    if (((flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) == 0) ||

            !isNonOffloadableEffectEnabled()) {

        profile = getProfileForDirectOutput(device,

                                           samplingRate,

                                           format,

                                           channelMask,

                                           (audio_output_flags_t)flags);

    }

    if (profile != 0) {

        sp<AudioOutputDescriptor> outputDesc = NULL;

        for (size_t i = 0; i < mOutputs.size(); i++) {

            sp<AudioOutputDescriptor> desc = mOutputs.valueAt(i);

            if (!desc->isDuplicated() && (profile == desc->mProfile)) {

                outputDesc = desc;

                // reuse direct output if currently open and configured with same parameters

                if ((samplingRate == outputDesc->mSamplingRate) &&

                        (format == outputDesc->mFormat) &&

                        (channelMask == outputDesc->mChannelMask)) {

                    outputDesc->mDirectOpenCount++;

                    ALOGV("getOutput() reusing direct output %d", mOutputs.keyAt(i));

                    return mOutputs.keyAt(i);

                }

            }

        }

        // close direct output if currently open and configured with different parameters

        if (outputDesc != NULL) {

            closeOutput(outputDesc->mIoHandle);

        }

        outputDesc = new AudioOutputDescriptor(profile);

        outputDesc->mDevice = device;

        outputDesc->mLatency = 0;

        outputDesc->mFlags =(audio_output_flags_t) (outputDesc->mFlags | flags);

        audio_config_t config = AUDIO_CONFIG_INITIALIZER;

        config.sample_rate = samplingRate;

        config.channel_mask = channelMask;

        config.format = format;

        if (offloadInfo != NULL) {

            config.offload_info = *offloadInfo;

        }

        status = mpClientInterface->openOutput(profile->mModule->mHandle,

                                               &output,

                                               &config,

                                               &outputDesc->mDevice,

                                               String8(""),

                                               &outputDesc->mLatency,

                                               outputDesc->mFlags);

        // only accept an output with the requested parameters

        if (status != NO_ERROR ||

            (samplingRate != 0 && samplingRate != config.sample_rate) ||

            (format != AUDIO_FORMAT_DEFAULT && format != config.format) ||

            (channelMask != 0 && channelMask != config.channel_mask)) {

            ALOGV("getOutput() failed opening direct output: output %d samplingRate %d %d,"

                    "format %d %d, channelMask %04x %04x", output, samplingRate,

                    outputDesc->mSamplingRate, format, outputDesc->mFormat, channelMask,

                    outputDesc->mChannelMask);

            if (output != AUDIO_IO_HANDLE_NONE) {

                mpClientInterface->closeOutput(output);

            }

            return AUDIO_IO_HANDLE_NONE;

        }

        outputDesc->mSamplingRate = config.sample_rate;

        outputDesc->mChannelMask = config.channel_mask;

        outputDesc->mFormat = config.format;

        outputDesc->mRefCount[stream] = 0;

        outputDesc->mStopTime[stream] = 0;

        outputDesc->mDirectOpenCount = 1;

        audio_io_handle_t srcOutput = getOutputForEffect();

        addOutput(output, outputDesc);

        audio_io_handle_t dstOutput = getOutputForEffect();

        if (dstOutput == output) {

            mpClientInterface->moveEffects(AUDIO_SESSION_OUTPUT_MIX, srcOutput, dstOutput);

        }

        mPreviousOutputs = mOutputs;

        ALOGV("getOutput() returns new direct output %d", output);

        mpClientInterface->onAudioPortListUpdate();

        return output;

    }

non_direct_output:

    // ignoring channel mask due to downmix capability in mixer

    // open a non direct output

    // for non direct outputs, only PCM is supported

    if (audio_is_linear_pcm(format)) {

        // get which output is suitable for the specified stream. The actual

        // routing change will happen when startOutput() will be called

        SortedVector<audio_io_handle_t> outputs = getOutputsForDevice(device, mOutputs);

        // at this stage we should ignore the DIRECT flag as no direct output could be found earlier

        flags = (audio_output_flags_t)(flags & ~AUDIO_OUTPUT_FLAG_DIRECT);

        output = selectOutput(outputs, flags, format);

    }

    ALOGW_IF((output == 0), "getOutput() could not find output for stream %d, samplingRate %d,"

            "format %d, channels %x, flags %x", stream, samplingRate, format, channelMask, flags);

    ALOGV("getOutput() returns output %d", output);

    return output;

}

SortedVector<audio_io_handle_t> AudioPolicyManager::getOutputsForDevice(audio_devices_t device,

                        DefaultKeyedVector<audio_io_handle_t, sp<AudioOutputDescriptor> > openOutputs)

{

    SortedVector<audio_io_handle_t> outputs;

    ALOGVV("getOutputsForDevice() device %04x", device);

    for (size_t i = 0; i < openOutputs.size(); i++) {

        ALOGVV("output %d isDuplicated=%d device=%04x",

                i, openOutputs.valueAt(i)->isDuplicated(), openOutputs.valueAt(i)->supportedDevices());

        if ((device & openOutputs.valueAt(i)->supportedDevices()) == device) {

            ALOGVV("getOutputsForDevice() found output %d", openOutputs.keyAt(i));

            outputs.add(openOutputs.keyAt(i));

        }

    }

    return outputs;

}


Reprinted from blog.csdn.net/weixin_42082222/article/details/104030513