Android Framework audio subsystem (12) HAL layer analysis

Master link for this series: thematic sub-directory of the Android Framework audio subsystem


Summary and description of key points in this chapter:

This chapter focuses on the HAL layer analysis part (upper left of the mind map above). It first explains the framework of the HAL layer, then walks through the source code for reading audio data, writing audio data, setting parameters, and getting parameters, to build a deep understanding of the HAL layer call flow.


1 HAL layer framework analysis

The overall architecture diagram of the audio system is as follows:

@ 1 Two parts of the HAL layer

Here we focus on the HAL layer. There is a HAL for audio and a HAL for audio_policy (we don't care about the audio_policy HAL; it has essentially been abandoned). The layer below the HAL uses TinyAlsa (a trimmed-down version of the ALSA library). The HAL layer itself is divided into two parts:

  1. One part consists of the various audio devices, each implemented by its own library file: for example audio.a2dp.default.so (manages Bluetooth A2DP audio), audio.usb.default.so (manages USB external audio), and audio.primary.default.so (manages most of the audio on the device).
  2. The other part is the audio implementation provided by the platform vendor itself, for example audio.primary.tiny4412.so.

@ 2 Key classes and structures

Generally speaking, for convenience, the HAL provides a unified interface to the upper layer, and likewise has a set of interfaces/classes for operating the hardware. They are:

  • The upward-facing interface struct audio_hw_device: defined in audio_hw_hal.cpp, it wraps the hw_device_t structure. audio_hw_hal.cpp lives under hardware/libhardware_legacy/audio. (The newer architecture has an audio_hw.c under hardware/libhardware/modules/audio, which is the new Audio HAL file, but it is not used on the 5.0 board analyzed here: all of its functions are empty stubs, so libhardware_legacy still plays the leading role.)
  • The downward-facing hardware class AudioHardware: generally implemented in device/<platform vendor>/common/libaudio/audioHardware.cpp and provided by the vendor, it uses the tinyalsa library interfaces. The vendor implements the hardware access interface, for which Android specifies a set of interfaces. The inheritance relationship of the key classes is:
AudioHardwareInterface   // hardware/AudioHardwareInterface.cpp, the most basic interface class
    ↑ (inherits)
AudioHardwareBase        // hardware/AudioHardwareBase.h, the interface Audio HAL defines for vendors
    ↑ (inherits)
AudioHardware            // device/<platform vendor>/common/libaudio/audioHardware.cpp, the vendor's implementation

In the vendor's HAL, AudioHardware (in audioHardware.cpp) represents a sound card. It uses the audio_stream_out structure to represent output and audio_stream_in to represent input (audio_stream_out contains write(), and audio_stream_in contains read()).

@ 3 Summary of HAL-related data structures:

/* Linking up and down:
 * Summary of the Audio HAL call flow: the upper layer calls
 * legacy_adev_open() in audio_hw_hal.cpp and obtains a
 * struct audio_hw_device. This structure is the interface through
 * which the upper layer uses the hardware, and the functions in it
 * all rely on the vendor-supplied AudioHardwareInterface.
 */
struct legacy_audio_device {
    struct audio_hw_device device;      // standardizes the interface provided upward
    struct AudioHardwareInterface *hwif;// accesses the hardware downward; points to the vendor's AudioHardware
};

/* The interface the HAL exposes directly to the upper layer has no
 * read/write functions. To record or play sound, an application must
 * first open an output or input (open_output_stream/open_input_stream
 * in audio_hw_device) and then use the write/read functions inside it
 * to move audio data through the sound card. This is the relationship
 * between audio_hw_device and audio_stream_out/audio_stream_in.
 */
struct legacy_stream_out {
    struct audio_stream_out stream; // standardizes the interface provided upward
    AudioStreamOut *legacy_out;     // accesses the hardware downward; points to the vendor's AudioStreamOutALSA
};

struct legacy_stream_in {
    struct audio_stream_in stream;// standardizes the interface provided upward
    AudioStreamIn *legacy_in;     // accesses the hardware downward; points to the vendor's AudioStreamInALSA
};
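The legacy_* wrappers above are recovered from the generic pointers by a simple cast. A minimal sketch, assuming the usual audio_hw_hal.cpp pattern (the to_ladev/to_cladev helpers appear in the code later in this article):

// Minimal sketch, assuming the usual audio_hw_hal.cpp pattern: because
// 'device' is the first member of legacy_audio_device, a pointer to the
// generic audio_hw_device can be cast back to its enclosing wrapper.
static inline struct legacy_audio_device *to_ladev(struct audio_hw_device *dev)
{
    return reinterpret_cast<struct legacy_audio_device *>(dev);
}

static inline const struct legacy_audio_device *to_cladev(const struct audio_hw_device *dev)
{
    return reinterpret_cast<const struct legacy_audio_device *>(dev);
}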

2 HAL layer source code analysis

Next we analyze several key flows (writing data, reading data, getting parameters, setting parameters) to interpret the HAL call sequence.

The process by which AudioFlinger loads the library is as follows:

AudioFlinger::loadHwModule
->AudioFlinger::loadHwModule_l
-->load_audio_interface
--->audio_hw_device_open(mod, dev); // obtains the audio_hw_device_t structure and its operations
---->module->methods->open          // this corresponds to legacy_adev_open
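For reference, a simplified sketch of load_audio_interface (modeled on the AOSP 5.x AudioFlinger.cpp, with error handling trimmed):

#include <hardware/audio.h>

// Simplified sketch of load_audio_interface() (modeled on AOSP 5.x
// AudioFlinger.cpp; error handling trimmed): resolve the module by name,
// then open the device through it.
static int load_audio_interface(const char *if_name, audio_hw_device_t **dev)
{
    const hw_module_t *mod;
    // finds e.g. audio.primary.<ro.hardware>.so under /system/lib/hw,
    // falling back to audio.primary.default.so
    int rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, &mod);
    if (rc)
        return rc;
    // invokes module->methods->open, i.e. legacy_adev_open() below
    return audio_hw_device_open(mod, dev);
}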

Here, the legacy_adev_open method of the HAL layer is called, and the code is implemented as follows:

static int legacy_adev_open(const hw_module_t* module, const char* name,
                            hw_device_t** device)
{
    struct legacy_audio_device *ladev;
    int ret;

    if (strcmp(name, AUDIO_HARDWARE_INTERFACE) != 0)
        return -EINVAL;

    ladev = (struct legacy_audio_device *)calloc(1, sizeof(*ladev));
    if (!ladev)
        return -ENOMEM;

    // populate the structure fields
    ladev->device.common.tag = HARDWARE_DEVICE_TAG;
    ladev->device.common.version = AUDIO_DEVICE_API_VERSION_2_0;
    ladev->device.common.module = const_cast<hw_module_t*>(module);
    ladev->device.common.close = legacy_adev_close;
    ladev->device.init_check = adev_init_check;
    ladev->device.set_voice_volume = adev_set_voice_volume;
    ladev->device.set_master_volume = adev_set_master_volume;
    ladev->device.get_master_volume = adev_get_master_volume;
    ladev->device.set_mode = adev_set_mode;
    ladev->device.set_mic_mute = adev_set_mic_mute;
    ladev->device.get_mic_mute = adev_get_mic_mute;
    ladev->device.set_parameters = adev_set_parameters;
    ladev->device.get_parameters = adev_get_parameters;
    ladev->device.get_input_buffer_size = adev_get_input_buffer_size;
    ladev->device.open_output_stream = adev_open_output_stream;
    ladev->device.close_output_stream = adev_close_output_stream;
    ladev->device.open_input_stream = adev_open_input_stream;
    ladev->device.close_input_stream = adev_close_input_stream;
    ladev->device.dump = adev_dump;
    /* Key point:
     * this is where the audio_hw_device_t structure is tied to the hwif
     * (AudioHardwareInterface) interface. createAudioHardware() returns
     * the vendor pointer that implements that interface, so subsequent
     * calls through hwif are equivalent to calling into the vendor library.
     */
    ladev->hwif = createAudioHardware();
    if (!ladev->hwif) {
        ret = -EIO;
        goto err_create_audio_hw;
    }
    *device = &ladev->device.common;
    return 0;

err_create_audio_hw:
    free(ladev);
    return ret;
}

With the above analysis, the chain from the Framework native layer, through the HAL layer framework, down to the third-party platform vendor library has been connected.
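On the vendor side, createAudioHardware() is the factory the vendor library exports. A minimal hypothetical sketch (the real implementation lives in the vendor's audioHardware.cpp):

// Hypothetical sketch of the vendor-side factory: it returns the vendor's
// AudioHardware object, which implements AudioHardwareInterface. The real
// code lives in the vendor library (e.g. audio.primary.tiny4412.so).
namespace android_audio_legacy {

AudioHardwareInterface *createAudioHardware(void)
{
    return new AudioHardware();  // the vendor's sound-card class
}

}  // namespace android_audio_legacy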

2.1 Writing data

Starting from the native layer of the Framework, the audio_hw_device_t structure first obtains an audio_stream_out via adev_open_output_stream, so we start our analysis there. The code is as follows:

static int adev_open_output_stream(struct audio_hw_device *dev,
                                   audio_io_handle_t handle,
                                   audio_devices_t devices,
                                   audio_output_flags_t flags,
                                   struct audio_config *config,
                                   struct audio_stream_out **stream_out,
                                   const char *address __unused)
{
    struct legacy_audio_device *ladev = to_ladev(dev);
    status_t status;
    struct legacy_stream_out *out;
    int ret;

    // the legacy_stream_out here wraps audio_stream_out (equivalent as a struct type)
    out = (struct legacy_stream_out *)calloc(1, sizeof(*out));
    if (!out)
        return -ENOMEM;

    devices = convert_audio_device(devices, HAL_API_REV_2_0, HAL_API_REV_1_0);
    // this establishes the link between audio_stream_out and ladev->hwif
    out->legacy_out = ladev->hwif->openOutputStreamWithFlags(devices, flags,
                                                    (int *) &config->format,
                                                    &config->channel_mask,
                                                    &config->sample_rate, &status);
    if (!out->legacy_out) {
        ret = status;
        goto err_open;
    }

    out->stream.common.get_sample_rate = out_get_sample_rate;
    out->stream.common.set_sample_rate = out_set_sample_rate;
    out->stream.common.get_buffer_size = out_get_buffer_size;
    out->stream.common.get_channels = out_get_channels;
    out->stream.common.get_format = out_get_format;
    out->stream.common.set_format = out_set_format;
    out->stream.common.standby = out_standby;
    out->stream.common.dump = out_dump;
    out->stream.common.set_parameters = out_set_parameters;
    out->stream.common.get_parameters = out_get_parameters;
    out->stream.common.add_audio_effect = out_add_audio_effect;
    out->stream.common.remove_audio_effect = out_remove_audio_effect;
    out->stream.get_latency = out_get_latency;
    out->stream.set_volume = out_set_volume;
    out->stream.write = out_write;
    out->stream.get_render_position = out_get_render_position;
    out->stream.get_next_write_timestamp = out_get_next_write_timestamp;

    // write out->stream back into the stream_out parameter
    *stream_out = &out->stream;
    return 0;

err_open:
    free(out);
    *stream_out = NULL;
    return ret;
}

Here the audio output stream (of type audio_stream_out) is obtained through hwif; out->stream is then initialized, the write function out_write is registered, and the stream is returned through the caller's stream_out pointer (of type audio_stream_out). When the upper layer performs a write, out_write is executed. The code is as follows:

static ssize_t out_write(struct audio_stream_out *stream, const void* buffer,
                         size_t bytes)
{
    struct legacy_stream_out *out = reinterpret_cast<struct legacy_stream_out *>(stream);
    return out->legacy_out->write(buffer, bytes);
}

This directly calls the write method of the third-party platform vendor library (out->legacy_out->write). On a Qualcomm platform, for example, hwif is AudioHardwareALSA, the write in question is AudioStreamOutALSA's write method, and it ultimately calls pcm_write to push the data out.
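As a hedged illustration of that last hop (a minimal sketch assuming a tinyalsa-based vendor path, not the actual vendor source), the bottom of a vendor write path typically looks like this:

#include <sys/types.h>
#include <tinyalsa/asoundlib.h>

// Minimal sketch (not the actual vendor source): the bottom of a vendor
// write path once it reaches tinyalsa.
static ssize_t vendor_stream_write(struct pcm *pcm, const void *buffer, size_t bytes)
{
    // pcm_write() returns 0 on success and a negative error code on failure
    int err = pcm_write(pcm, buffer, bytes);
    return (err == 0) ? (ssize_t)bytes : (ssize_t)err;
}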

2.2 Reading data

Starting from the native layer of the Framework, the audio_hw_device_t structure first obtains an audio_stream_in via adev_open_input_stream, so we start our analysis there. The code is as follows:


/** This method creates and opens the audio hardware input stream */
static int adev_open_input_stream(struct audio_hw_device *dev,
                                  audio_io_handle_t handle,
                                  audio_devices_t devices,
                                  struct audio_config *config,
                                  struct audio_stream_in **stream_in,
                                  audio_input_flags_t flags __unused,
                                  const char *address __unused,
                                  audio_source_t source __unused)
{
    struct legacy_audio_device *ladev = to_ladev(dev);
    status_t status;
    struct legacy_stream_in *in;
    int ret;
    // the legacy_stream_in here wraps audio_stream_in (equivalent as a struct type)
    in = (struct legacy_stream_in *)calloc(1, sizeof(*in));
    if (!in)
        return -ENOMEM;

    devices = convert_audio_device(devices, HAL_API_REV_2_0, HAL_API_REV_1_0);
    // this establishes the link between audio_stream_in and ladev->hwif
    in->legacy_in = ladev->hwif->openInputStream(devices, (int *) &config->format,
                                                 &config->channel_mask, &config->sample_rate,
                                                 &status, (AudioSystem::audio_in_acoustics)0);
    if (!in->legacy_in) {
        ret = status;
        goto err_open;
    }

    in->stream.common.get_sample_rate = in_get_sample_rate;
    in->stream.common.set_sample_rate = in_set_sample_rate;
    in->stream.common.get_buffer_size = in_get_buffer_size;
    in->stream.common.get_channels = in_get_channels;
    in->stream.common.get_format = in_get_format;
    in->stream.common.set_format = in_set_format;
    in->stream.common.standby = in_standby;
    in->stream.common.dump = in_dump;
    in->stream.common.set_parameters = in_set_parameters;
    in->stream.common.get_parameters = in_get_parameters;
    in->stream.common.add_audio_effect = in_add_audio_effect;
    in->stream.common.remove_audio_effect = in_remove_audio_effect;
    in->stream.set_gain = in_set_gain;
    in->stream.read = in_read;
    in->stream.get_input_frames_lost = in_get_input_frames_lost;
    // write in->stream back into the stream_in parameter
    *stream_in = &in->stream;
    return 0;

err_open:
    free(in);
    *stream_in = NULL;
    return ret;
}

Here the audio input stream (of type audio_stream_in) is obtained through hwif; in->stream is then initialized, the read function in_read is registered, and the stream is returned through the caller's stream_in pointer (of type audio_stream_in). When the upper layer performs a read, in_read is executed. The code is as follows:

static ssize_t in_read(struct audio_stream_in *stream, void* buffer,
                       size_t bytes)
{
    struct legacy_stream_in *in =
        reinterpret_cast<struct legacy_stream_in *>(stream);
    return in->legacy_in->read(buffer, bytes);
}

This directly calls the read method of the third-party platform vendor library (in->legacy_in->read). On a Qualcomm platform, for example, hwif is AudioHardwareALSA, the read in question is AudioStreamInALSA's read method, and it ultimately calls pcm_read to pull the data in.
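Mirroring the write path, a minimal sketch (again not the actual vendor source) of the bottom of the read path:

#include <sys/types.h>
#include <tinyalsa/asoundlib.h>

// Minimal sketch (not the actual vendor source): the capture-side
// counterpart, bottoming out in tinyalsa's pcm_read().
static ssize_t vendor_stream_read(struct pcm *pcm, void *buffer, size_t bytes)
{
    // pcm_read() returns 0 on success and a negative error code on failure
    int err = pcm_read(pcm, buffer, bytes);
    return (err == 0) ? (ssize_t)bytes : (ssize_t)err;
}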

2.3 Getting parameters

Starting from the native layer of the Framework, a get-parameters call on the audio_hw_device_t structure eventually lands in adev_get_parameters, so we start our analysis there. The code is as follows:

static char * adev_get_parameters(const struct audio_hw_device *dev,
                                  const char *keys)
{
    const struct legacy_audio_device *ladev = to_cladev(dev);
    String8 s8;

    s8 = ladev->hwif->getParameters(String8(keys));
    return strdup(s8.string());
}

This directly calls the getParameters method of the third-party platform vendor library.

2.4 Setting parameters

Starting from the native layer of the Framework, a set-parameters call on the audio_hw_device_t structure eventually lands in adev_set_parameters, so we start our analysis there. The code is as follows:

static int adev_set_parameters(struct audio_hw_device *dev, const char *kvpairs)
{
    struct legacy_audio_device *ladev = to_ladev(dev);
    return ladev->hwif->setParameters(String8(kvpairs));
}

This directly calls the setParameters method of the third-party platform vendor library.
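Both entry points exchange semicolon-separated key=value strings. A hedged usage sketch from the caller's side (the "routing" key is illustrative; the exact keys are platform-defined):

#include <stdlib.h>
#include <hardware/audio.h>

// Hedged usage sketch: driving the two entry points above through the
// audio_hw_device function pointers.
void parameter_round_trip(struct audio_hw_device *adev)
{
    // lands in adev_set_parameters() -> hwif->setParameters()
    adev->set_parameters(adev, "routing=2");

    // lands in adev_get_parameters() -> hwif->getParameters();
    // the returned string was strdup'ed, so the caller must free it
    char *value = adev->get_parameters(adev, "routing");
    free(value);
}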

2.5 Process summary

  1. Determine the library file name from the configuration file. HAL files generally live under /system/lib/hw, and the name of the library that drives the audio hardware is specified in the audio policy configuration file under /system/etc (the system vendor's part of the HAL).
  2. Load the library (*.so) file. Calling the open function in the HAL file constructs the audio_hw_device structure inside the HAL. This structure contains the various functions, notably open_output_stream/open_input_stream. AudioFlinger wraps the audio_hw_device structure in an AudioHwDev object and stores it in mAudioHwDevs.
  3. Call open_output_stream/open_input_stream on the HAL's audio_hw_device structure, which constructs an audio_stream_out/audio_stream_in structure.
  4. Write/read data. This goes through tinyalsa operations: the tinyalsa library, located in external/tinyalsa, controls the sound card directly via system calls. It builds into the library libtinyalsa.so (only two source files are involved: mixer.c and pcm.c) and the tools tinycap, tinymix, tinypcminfo and tinyplay, which can drive the audio path directly for recording and playback tests. The pcm_XXX calls that operate the sound card are a wrapper over the driver layer (see the sketch after this list).
  5. The tinyalsa library then drives the audio driver, and the audio driver in turn drives the sound-card hardware.
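To make step 4 concrete, here is a minimal, self-contained tinyalsa playback sketch (the card/device numbers and the silent buffer are assumptions for illustration):

#include <string.h>
#include <tinyalsa/asoundlib.h>

// Minimal tinyalsa playback sketch: open PCM device 0 on card 0 (assumed
// numbers), write one period of silence, and close the device.
int play_one_period(void)
{
    struct pcm_config config;
    memset(&config, 0, sizeof(config));
    config.channels = 2;
    config.rate = 44100;
    config.period_size = 1024;
    config.period_count = 4;
    config.format = PCM_FORMAT_S16_LE;

    struct pcm *pcm = pcm_open(0 /*card*/, 0 /*device*/, PCM_OUT, &config);
    if (pcm == NULL || !pcm_is_ready(pcm))
        return -1;

    char buf[1024 * 2 * 2];      // one period: 1024 frames * 2 channels * 2 bytes
    memset(buf, 0, sizeof(buf)); // silence

    int err = pcm_write(pcm, buf, sizeof(buf));
    pcm_close(pcm);
    return err;
}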