Android Audio Subsystem (1): the openOutput Flow

Hello! This is Fengzheng's blog; you are welcome to discuss with me.

Audio is one of the more complex subsystems in Android. I am taking notes as I learn, so if anything here is wrong, please point it out in the comments.

Android N is used as the example throughout.

To avoid getting lost in the code, here is the function call stack up front:

openOutput
	|- openOutput_l
		|- findSuitableHwDev_l
		|	|- loadHwModule_l(audio_interfaces[i])
		|	|	|- load_audio_interface
		|	|	|	|- hw_get_module_by_class
		|	|	|	|	|- load
		|	|	|	|- audio_hw_device_open
		|	|	|	|	|- module->methods->open
		|	|	|- nextUniqueId(AUDIO_UNIQUE_ID_USE_MODULE)
		|	|	|- mAudioHwDevs.add
		|- nextUniqueId(AUDIO_UNIQUE_ID_USE_OUTPUT)
		|- outHwDev->openOutputStream
		|- MixerThread
		|- mPlaybackThreads.add

We start the analysis from AudioFlinger::openOutput; AudioFlinger is the entry point of the AudioFlinger service:

status_t AudioFlinger::openOutput(audio_module_handle_t module,
                                  audio_io_handle_t *output,
                                  audio_config_t *config,
                                  audio_devices_t *devices,
                                  const String8& address,
                                  uint32_t *latencyMs,
                                  audio_output_flags_t flags)
{
    // ...
    // `module` normally comes from loadHwModule; it is the id of an audio
    // interface, used to find the matching AudioHwDevice in mAudioHwDevs.
    sp<PlaybackThread> thread = openOutput_l(module, output, config, *devices, address, flags);
    // ...
}

The real work happens in openOutput_l, which mainly operates on an AudioHwDevice *outHwDev:

sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{
    // AudioHwDevice represents an opened audio interface device
    AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices);

    if (*output == AUDIO_IO_HANDLE_NONE) {
        *output = nextUniqueId(AUDIO_UNIQUE_ID_USE_OUTPUT);
    } else {
        // Audio Policy does not currently request a specific output handle.
        // If this is ever needed, see openInput_l() for example code.
        ALOGE("openOutput_l requested output handle %d is not AUDIO_IO_HANDLE_NONE", *output);
        return 0;
    }

    AudioStreamOut *outputStream = NULL;
    // Open an output stream on the device. This produces an audio_stream_out_t *stream
    // for the given audio_devices_t devices; internally an AudioStreamOut
    // (AudioStreamOut *outputStream = new AudioStreamOut(this, flags);)
    // is created to wrap the audio_stream_out_t together with the audio_devices_t.
    status_t status = outHwDev->openOutputStream(
            &outputStream,
            *output,
            devices,
            flags,
            config,
            address.string());

    mHardwareStatus = AUDIO_HW_IDLE;

    if (status == NO_ERROR) {
        // create the playback thread
        PlaybackThread *thread;
        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                || !isValidPcmSinkFormat(config->format)
                || !isValidPcmSinkChannelMask(config->channel_mask)) {
            thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);
        } else {
            thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);
        }
        // add the thread to mPlaybackThreads
        mPlaybackThreads.add(*output, thread);
        return thread;
    }

    return 0;
}

AudioHwDevice has a member variable audio_hw_device_t* const mHwDevice; the type audio_hw_device_t bundles the set of attributes that an audio interface device exposes.

So next we trace into AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices):

AudioHwDevice* AudioFlinger::findSuitableHwDev_l(
        audio_module_handle_t module,
        audio_devices_t devices)
{
    // If module is 0, first load all well-known audio interface devices
    // (the loading ultimately goes through AudioFlinger's loadHwModule_l),
    // then pick the device that supports the requested devices.
    if (module == 0) {
        ALOGW("findSuitableHwDev_l() loading well know audio hw modules");
        // open every audio device listed in the audio_interfaces array
        for (size_t i = 0; i < ARRAY_SIZE(audio_interfaces); i++) {
            loadHwModule_l(audio_interfaces[i]);
        }
        // then try to find a module supporting the requested device.
        for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
            AudioHwDevice *audioHwDevice = mAudioHwDevs.valueAt(i);
            audio_hw_device_t *dev = audioHwDevice->hwDevice();
            if ((dev->get_supported_devices != NULL) &&
                    (dev->get_supported_devices(dev) & devices) == devices)
                return audioHwDevice;
        }
    } else {
        // A non-zero module means audio policy specified a concrete module id;
        // look it up in mAudioHwDevs to find the matching device.
        // check a match for the requested module handle
        AudioHwDevice *audioHwDevice = mAudioHwDevs.valueFor(module);
        if (audioHwDevice != NULL) {
            return audioHwDevice;
        }
    }

    return NULL;
}

Note: if module is 0, the well-known audio modules (listed in the audio_interfaces array) are loaded, passing each name in the array to loadHwModule_l:

#define AUDIO_HARDWARE_MODULE_ID_PRIMARY "primary"
#define AUDIO_HARDWARE_MODULE_ID_A2DP "a2dp"
#define AUDIO_HARDWARE_MODULE_ID_USB "usb"
#define AUDIO_HARDWARE_MODULE_ID_REMOTE_SUBMIX "r_submix"
#define AUDIO_HARDWARE_MODULE_ID_CODEC_OFFLOAD "codec_offload"
#define AUDIO_HARDWARE_MODULE_ID_STUB "stub"
static const char * const audio_interfaces[] = {
    AUDIO_HARDWARE_MODULE_ID_PRIMARY, // the on-board codec
    AUDIO_HARDWARE_MODULE_ID_A2DP,    // A2DP devices, Bluetooth high-fidelity audio
    AUDIO_HARDWARE_MODULE_ID_USB,     // USB audio devices
};

audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
{
    // Walk mAudioHwDevs to see whether this module is already loaded
    for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
        if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
            ALOGW("loadHwModule() module %s already loaded", name);
            return mAudioHwDevs.keyAt(i);
        }
    }

    audio_hw_device_t *dev;

    int rc = load_audio_interface(name, &dev);
    if (rc) {
        ALOGE("loadHwModule() error %d loading module %s", rc, name);
        return AUDIO_MODULE_HANDLE_NONE;
    }

    mHardwareStatus = AUDIO_HW_INIT;
    // check that the hardware initialized correctly
    rc = dev->init_check(dev);
    mHardwareStatus = AUDIO_HW_IDLE;

    AudioHwDevice::Flags flags = static_cast<AudioHwDevice::Flags>(0);
    {
      // scope for auto-lock pattern
        AutoMutex lock(mHardwareLock);

        if (0 == mAudioHwDevs.size()) {
            mHardwareStatus = AUDIO_HW_GET_MASTER_VOLUME;
            if (NULL != dev->get_master_volume) {
                float mv;
                // read the master volume
                if (OK == dev->get_master_volume(dev, &mv)) {
                    mMasterVolume = mv;
                }
            }

            mHardwareStatus = AUDIO_HW_GET_MASTER_MUTE;
            if (NULL != dev->get_master_mute) {
                bool mm;
                // read the mute state
                if (OK == dev->get_master_mute(dev, &mm)) {
                    mMasterMute = mm;
                }
            }
        }

        mHardwareStatus = AUDIO_HW_SET_MASTER_VOLUME;
        // set the master volume
        if ((NULL != dev->set_master_volume) &&
            (OK == dev->set_master_volume(dev, mMasterVolume))) {
            flags = static_cast<AudioHwDevice::Flags>(flags |
                    AudioHwDevice::AHWD_CAN_SET_MASTER_VOLUME);
        }

        mHardwareStatus = AUDIO_HW_SET_MASTER_MUTE;
        // set the mute state
        if ((NULL != dev->set_master_mute) &&
            (OK == dev->set_master_mute(dev, mMasterMute))) {
            flags = static_cast<AudioHwDevice::Flags>(flags |
                    AudioHwDevice::AHWD_CAN_SET_MASTER_MUTE);
        }

        mHardwareStatus = AUDIO_HW_IDLE;
    }

    // generate a unique handle identifying this module
    audio_module_handle_t handle = (audio_module_handle_t) nextUniqueId(AUDIO_UNIQUE_ID_USE_MODULE);
    // handle and AudioHwDevice form a one-to-one key/value pair: add the device
    // to the mAudioHwDevs collection, so the hardware device can later be
    // retrieved through the audio_module_handle_t handle.
    mAudioHwDevs.add(handle, new AudioHwDevice(handle, name, dev, flags));
    return handle;
}

The basic flow of the function:
[1] Use strncmp to check whether name already exists in mAudioHwDevs
[2] Check the hardware's initialization state through the HAL object dev
[3] Set master volume, mute, and other properties on the underlying device
[4] Generate a unique handle and add the AudioHwDevice object to the mAudioHwDevs collection
From this flow we can see that dev is bound to handle: given a handle we can fetch the AudioHwDevice from mAudioHwDevs and then obtain dev; or given a name we can find the handle, then the AudioHwDevice, and then dev. Either way, dev is what lets us operate on the underlying hardware.

static int load_audio_interface(const char *if_name, audio_hw_device_t **dev)
{
    const hw_module_t *mod;
    int rc;

    // Get the module. This loads the audio shared library, e.g.
    // audio.primary.xxxx.so; how the loading works is covered separately below.
    rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, &mod);
    ALOGE_IF(rc, "%s couldn't load audio hw module %s.%s (%s)", __func__,
                 AUDIO_HARDWARE_MODULE_ID, if_name, strerror(-rc));
    if (rc) {
        goto out;
    }
    // Open the device: every loaded module must provide an open method;
    // call it to open the audio device module.
    rc = audio_hw_device_open(mod, dev);
    ALOGE_IF(rc, "%s couldn't open audio hw device in %s.%s (%s)", __func__,
                 AUDIO_HARDWARE_MODULE_ID, if_name, strerror(-rc));
    if (rc) {
        goto out;
    }
    if ((*dev)->common.version < AUDIO_DEVICE_API_VERSION_MIN) {
        ALOGE("%s wrong audio hw device version %04x", __func__, (*dev)->common.version);
        rc = BAD_VALUE;
        goto out;
    }
    return 0;

out:
    *dev = NULL;
    return rc;
}

The library lookup inside hw_get_module_by_class proceeds as follows:

static const char *variant_keys[] = {
    "ro.hardware",  /* This goes first so that it can pick up a different
                       file on the emulator. */
    "ro.product.board",
    "ro.board.platform",
    "ro.arch"
};

int hw_get_module_by_class(const char *class_id, const char *inst,
                           const struct hw_module_t **module)
{
    // look for the library file named by the ro.hardware.<name> property
    snprintf(prop_name, sizeof(prop_name), "ro.hardware.%s", name);
    if (property_get(prop_name, prop, NULL) > 0) {
        if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
            goto found;
        }
    }
    // otherwise, try the library files named by the variant_keys properties
    for (i=0 ; i<HAL_VARIANT_KEYS_COUNT; i++) {
        if (property_get(variant_keys[i], prop, NULL) == 0) {
            continue;
        }
        if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
            goto found;
        }
    }
    // if still not found, fall back to the default build
    if (hw_module_exists(path, sizeof(path), name, "default") == 0) {
        goto found;
    }

    return -ENOENT;

found:
    /* load the module, if this fails, we're doomed, and we should not try
     * to load a different variant. */
    return load(class_id, path, module);
}

audio_hw_device_open:

// open the device
static inline int audio_hw_device_open(const struct hw_module_t* module,
                                       struct audio_hw_device** device)
{
    return module->methods->open(module, AUDIO_HARDWARE_INTERFACE,
                                 TO_HW_DEVICE_T_OPEN(device));
}


// @audio_hal.c
static struct hw_module_methods_t hal_module_methods = {
    .open = adev_open,
};

So audio_hw_device_open ends up calling the adev_open function from the library we just loaded; in other words, it ultimately invokes the HAL layer's open function.
From this we can draw a conclusion: AudioHwDevice is the framework-side abstraction of the underlying HAL; one AudioHwDevice corresponds to one HAL module.

That concludes findSuitableHwDev_l; next, let's look at openOutputStream:

status_t AudioHwDevice::openOutputStream(
        AudioStreamOut **ppStreamOut,
        audio_io_handle_t handle,
        audio_devices_t devices,
        audio_output_flags_t flags,
        struct audio_config *config,
        const char *address)
{
    struct audio_config originalConfig = *config;
    // create the AudioStreamOut audio output stream
    AudioStreamOut *outputStream = new AudioStreamOut(this, flags);

    status_t status = outputStream->open(handle, devices, config, address);

    // hand the newly created outputStream back through the ppStreamOut parameter
    *ppStreamOut = outputStream;
    return status;
}

Let's follow the open call:

// @audio.h
typedef struct audio_hw_device audio_hw_device_t;
// @AudioStreamOut.cpp
audio_hw_device_t *AudioStreamOut::hwDev() const
{
    return audioHwDev->hwDevice();
}
// @AudioStreamOut.cpp
status_t AudioStreamOut::open(
        audio_io_handle_t handle,
        audio_devices_t devices,
        struct audio_config *config,
        const char *address)
{
    audio_stream_out_t *outStream;
    // the HAL layer's open_output_stream function
    int status = hwDev()->open_output_stream(
            hwDev(),
            handle,
            devices,
            customFlags,
            config,
            &outStream,
            address);
    // ...
    return status;
}

open_output_stream lives in the HAL layer.
The last step of the flow is creating the mixer thread:

AudioFlinger::MixerThread::MixerThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output,
        audio_io_handle_t id, audio_devices_t device, bool systemReady, type_t type)
    :   PlaybackThread(audioFlinger, output, id, device, type, systemReady),
        // mAudioMixer below
        // mFastMixer below
        mFastMixerFutex(0),
        mMasterMono(false)
        // mOutputSink below
        // mPipeSink below
        // mNormalSink below
{
    mAudioMixer = new AudioMixer(mNormalFrameCount, mSampleRate);

    if (type == DUPLICATING) {
        // The Duplicating thread uses the AudioMixer and delivers data to OutputTracks
        // (downstream MixerThreads) in DuplicatingThread::threadLoop_write().
        // Do not create or use mFastMixer, mOutputSink, mPipeSink, or mNormalSink.
        return;
    }
    // create an NBAIO sink for the HAL output stream, and negotiate
    mOutputSink = new AudioStreamOutSink(output->stream);

    // create fast mixer and configure it initially with just one fast track for our submix
    mFastMixer = new FastMixer();
}

Playback threads such as MixerThread are a big topic in their own right and are covered in a separate article.

It is not hard to see that openOutput essentially asks the underlying hardware to create an output data stream according to the device and its configuration; each stream is bound to one playback thread and uniquely identified by its output ID.

With the above, we now understand the basic pattern AudioFlinger uses to drive the underlying audio HAL.
From it we can also infer the rough sequence the upper layers follow when operating on a data stream:
1. Create the data stream via openOutput
2. Obtain the stream's unique identifier, the output ID
3. Use the output ID to find the thread bound to the stream
4. Hand the audio data to that thread, which writes it through the output stream into the HAL and finally to the physical device


Reposted from blog.csdn.net/Guet_Kite/article/details/113825761