Android Audio Chapter (1): Audio Architecture


Foreword

My day job is low-level driver development, and I am currently learning the audio driver stack. I want to use my spare time to summarize what I have learned.


1. What is the difference between Android and Linux?

Android is derived from Linux: it runs on top of the Linux kernel, which provides core system services including security, memory management, process management, the network stack, and the driver model. Strictly speaking, however, Android is not a Linux distribution; it is an operating system built on the Linux kernel, adding its own drivers and user-space programming interfaces.

On Android, to avoid the open-source obligations that the Linux kernel's license places on derived code, the traditional device driver is split into a kernel driver and a user-space driver (the HAL). With this design, hardware manufacturers can keep proprietary functionality in the HAL layer and only need to expose interfaces and dynamic libraries, without releasing source code. As Android has matured, this approach has become mainstream, and some commercial Linux projects have borrowed it as well.
For user programs, Android provides its own framework. Using it, application developers do not need to deal with the complex logic of the lower layers and can focus on their application's business logic. As shown in the figure below, the HAL layer exposes an interface that calls down into the kernel drivers, and the Framework layer in turn controls the HAL layer.
[Figure: Android software stack, from the Application Framework down to the Linux kernel]
As shown in the figure above, Android is divided into the following layers:

  • Application Framework: the application framework is what application developers use most. As a hardware developer you should know the developer APIs well, since many of them map directly onto the underlying HAL interfaces and hint at what a driver implementation must provide. This layer is written in Java.
  • Binder IPC: the Binder inter-process communication (IPC) mechanism allows the application framework to cross process boundaries and call into Android system service code, which is how the high-level framework APIs interact with Android system services. At the application framework level, developers do not see this communication; everything appears to "just work".
  • System Services: a system service is a modular component focused on a specific function, such as the window manager, the search service, or the notification manager. Functionality exposed by the Application Framework API communicates with system services to reach the underlying hardware. Android has two groups of services: "system" (services such as the window manager and notification manager) and "media" (services involved in playing and recording media).
  • Hardware Abstraction Layer (HAL): a HAL defines a standard interface for hardware vendors to implement, which lets Android remain agnostic about lower-level driver implementations. With a HAL you can implement the related functionality without affecting or changing the higher-level system. A HAL implementation is packaged into a module and loaded by the Android system at the appropriate time. See the Hardware Abstraction Layer (HAL) documentation for details. This layer is written in C/C++.
  • Linux Kernel: developing device drivers here is similar to writing ordinary Linux device drivers. Android's kernel includes special additions such as the Low Memory Killer (a memory management mechanism that reclaims memory more aggressively), wake locks (backing the PowerManager system service), the Binder IPC driver, and other features important for mobile embedded platforms. These additions mainly enhance the system and do not affect driver development. You can use any kernel version as long as it supports the required features (such as the binder driver), but the latest Android kernel is recommended; see the "Building the kernel" article for more information. This layer is written in C.

2. Audio architecture

1. Audio subsystem architecture diagrams

The system architecture diagram is as follows:
[Figure: Audio subsystem system architecture]


The software architecture diagram is as follows:
[Figure: Audio subsystem software architecture]

【Architecture Description】:

  1. HAL: as the name implies, the hardware abstraction layer is a separately packaged layer that adapts to different hardware. Its task is to actually connect AudioFlinger/AudioPolicyService to the hardware devices. The vendor-customized Audio HAL (or the tinyalsa-based implementation that ships with Android), together with the Linux kernel driver, forms the lowest software layer through which Android interacts with the hardware. tinyalsa is a trimmed-down take on the ALSA architecture of the Linux audio subsystem; Android's TinyALSA is derived from Linux ALSA.

  2. AudioPolicy and AudioFlinger form the core of the Android audio framework layer and interact directly with the lowest-level programs.

    The main responsibilities are:
      
    ★ Write audio data down to the HAL layer for playback, and collect audio data from the HAL layer for transmission or storage (AudioFlinger).
    ★ Control the audio path according to the use case and configuration, i.e. decide which device the sound should come out of in which scenario (AudioPolicy, realized by driving AudioFlinger).

    AudioFlinger: mainly responsible for managing the audio stream devices and for processing and transporting audio stream data (volume calculation, resampling, mixing, sound effects, and so on). It receives data from multiple apps, mixes it, and delivers it downstream; it is the executor of the policy, handling how to talk to the audio devices, how to maintain the audio devices in the system, and how to mix multiple audio streams (an illustrative mixing sketch appears at the end of this section). Concretely, it:
    (1) manages the input and output devices of the entire audio system;
    (2) combines multiple audio streams into one PCM stream and sends it to the designated output device.

    AudioPolicyService: mainly responsible for audio policy, volume adjustment, device selection, and audio path selection. It decides which device to output to: headphones when headphones are plugged in, a Bluetooth device when Bluetooth is connected. It is the policy maker, deciding, for example, when to open an audio interface device and which device a given stream type maps to.

  3. MediaPlayer, AudioTrack, AudioService, AudioManager, AudioRecord and MediaRecorder are the interfaces that the Android audio framework layer provides to the upper layers.

    MediaPlayer and AudioTrack are the two interfaces we choose between when playing audio. What is the difference? MediaPlayer is more general: it can decode compressed media files and output the result to a device. AudioTrack is simpler: it can only play PCM streams (already-decoded data).

    AudioRecord and MediaRecorder are the two audio-recording APIs provided by the Android SDK. MediaRecorder is the higher-level API: it can directly compress and encode the audio captured by the phone's microphone (for example to AAC) and store it as a file. AudioRecord is lower-level: it lets developers obtain the PCM audio stream in memory, which suits further processing of the audio (sound effects, compression with a third-party encoding library, network transmission, and so on).
    Internally, MediaRecorder also calls AudioRecord to interact with AudioFlinger in the Framework layer.

    AudioService listens for intents from HDMI, FM and other applications and notifies AudioSystem; in effect it monitors the user's volume changes and keeps the volume shown in the UI in sync.

    AudioManager provides the upper layers with interfaces for accessing the volume and controlling the ringer mode.

    AudioSystem acts as an internal helper class for AudioManager and AudioService, used only by them to adjust and set the phone's audio state.

  4. The application program implements the business logic required by the customer.

In this way, the basic Android audio framework is built up from bottom to top.
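
To make "combining multiple audio streams into one PCM stream" concrete, here is an illustrative sketch (not AudioFlinger's actual code) of the saturating mix-down at the heart of a mixer thread; the real mixer also applies per-track volume, resampling and format conversion:

#include <stdint.h>
#include <stddef.h>

/* Illustrative two-track 16-bit PCM mix-down with clipping. */
static void mix_pcm16(const int16_t *a, const int16_t *b,
                      int16_t *out, size_t samples)
{
    for (size_t i = 0; i < samples; i++) {
        int32_t s = (int32_t)a[i] + (int32_t)b[i]; /* widen to avoid overflow */
        if (s > INT16_MAX) s = INT16_MAX;          /* saturate instead of wrapping */
        if (s < INT16_MIN) s = INT16_MIN;
        out[i] = (int16_t)s;
    }
}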

2. Function and understanding of Audio HAL layer

AudioFlinger operates the underlying devices indirectly by operating the Audio HAL layer (reading and writing audio data, and getting and setting parameters).
The Audio HAL source file is audio_hw_hal.cpp. We first look at what capabilities the HAL layer offers (data flow), and then analyze what strategy the upper layers use to control it (control flow). This makes it much easier to understand the logic and implementation of the control flow.

2.1. Framework analysis of Audio HAL layer

1. The HAL layer has two parts:
the audio HAL and the audio_policy HAL (we do not care about the audio_policy HAL; it is essentially deprecated). The layer below the HAL uses TinyALSA (a trimmed-down version of the ALSA library).
2. Key classes and structures:
For convenience, the HAL must provide a unified interface to the upper layers; likewise there is a set of interfaces/a class for operating the hardware.
★ Upward interface, struct audio_hw_device: the struct audio_hw_device interface lives in audio_hw_hal.cpp and is a wrapper around the hw_device_t structure. hw_device_t itself is defined in hardware/libhardware/include/hardware/hardware.h.
★ Downward hardware-access class AudioHardware: generally implemented in device/<platform vendor>/common/libaudio/audioHardware.cpp and supplied by the vendor; internally it uses the tinyalsa library interfaces. The vendor implements the hardware access interface, and Android prescribes the set of interfaces it must provide.
3. HAL data structures:

	/* Linking up and down:
	 * Summary of the Audio HAL call flow: when the upper layers call
	 * legacy_adev_open() in audio_hw_hal.cpp, they get back a
	 * struct audio_hw_device. This structure is the interface through
	 * which the upper layers use the hardware, and its functions all
	 * rely on the vendor-supplied struct AudioHardwareInterface.
	 */
	struct legacy_audio_device {
	    struct audio_hw_device device;      // standard interface exposed upward
	    struct AudioHardwareInterface *hwif;// accesses hardware downward; points to the vendor's AudioHardware
	};
	 
	/* The interface the HAL exposes upward has no read/write functions.
	 * To record or play sound, an application must first open an output
	 * or input (open_output_stream/open_input_stream in audio_hw_device)
	 * and then use the write/read functions inside it to move audio data
	 * through the sound card. That is the relationship between
	 * audio_hw_device and audio_stream_out/audio_stream_in.
	 */
	struct legacy_stream_out {
	    struct audio_stream_out stream; // standard interface exposed upward
	    AudioStreamOut *legacy_out;     // accesses hardware downward; points to the vendor's AudioStreamOutALSA
	};
	 
	struct legacy_stream_in {
	    struct audio_stream_in stream;// standard interface exposed upward
	    AudioStreamIn *legacy_in;     // accesses hardware downward; points to the vendor's AudioStreamInALSA
	};

The three structures audio_hw_device, audio_stream_out and audio_stream_in are declared and defined in hardware/libhardware/include/hardware/audio.h.
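
Because the standard structure is embedded as the first member of each wrapper, a pointer to the standard structure can be converted back into a pointer to its wrapper. audio_hw_hal.cpp does this with small helpers such as to_ladev; a sketch of the idea (slightly simplified from the real file):

static inline struct legacy_audio_device *to_ladev(struct audio_hw_device *dev)
{
    /* valid because `device` is the first member of legacy_audio_device */
    return reinterpret_cast<struct legacy_audio_device *>(dev);
}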

2.2. Source code analysis of Audio HAL layer

Here we mainly analyze opening the device, the write-data operation, the read-data operation, getting parameters, and setting parameters.

1. Opening the device:
AudioFlinger loads and opens the device as follows:

AudioFlinger::loadHwModule
->AudioFlinger::loadHwModule_l
-->load_audio_interface
--->audio_hw_device_open(mod, dev);  // obtains the audio_hw_device_t structure and its operations
---->module->methods->open           // this corresponds to legacy_adev_open
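
For orientation, here is a minimal sketch of what this chain amounts to in terms of the public libhardware API (the helper open_primary_audio_hal and the "primary" instance name are illustrative; hw_get_module_by_class and audio_hw_device_open are the real entry points from hardware.h and audio.h):

#include <hardware/hardware.h>
#include <hardware/audio.h>

/* Hypothetical helper mirroring what AudioFlinger::loadHwModule_l does. */
static audio_hw_device_t *open_primary_audio_hal(void)
{
    const hw_module_t *module = NULL;
    audio_hw_device_t *dev = NULL;

    /* locate e.g. audio.primary.<board>.so by module class and instance */
    if (hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, "primary", &module) != 0)
        return NULL;

    /* wraps module->methods->open, i.e. legacy_adev_open here */
    if (audio_hw_device_open(module, &dev) != 0)
        return NULL;
    return dev;
}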

AudioFlinger calls the HAL's legacy_adev_open method. This is where the various functions of the opened device are initialized, connecting the hardware device structure to the third-party platform's hardware interface. The implementation is as follows:

static int legacy_adev_open(const hw_module_t* module, const char* name,
                            hw_device_t** device)
{
    struct legacy_audio_device *ladev;
    int ret;
 
    if (strcmp(name, AUDIO_HARDWARE_INTERFACE) != 0)
        return -EINVAL;
 
    ladev = (struct legacy_audio_device *)calloc(1, sizeof(*ladev));
    if (!ladev)
        return -ENOMEM;
 
    // fill in the structure
    ladev->device.common.tag = HARDWARE_DEVICE_TAG;
    ladev->device.common.version = AUDIO_DEVICE_API_VERSION_2_0;
    ladev->device.common.module = const_cast<hw_module_t*>(module);
    ladev->device.common.close = legacy_adev_close;
    ladev->device.init_check = adev_init_check;
    ladev->device.set_voice_volume = adev_set_voice_volume;
    ladev->device.set_master_volume = adev_set_master_volume;
    ladev->device.get_master_volume = adev_get_master_volume;
    ladev->device.set_mode = adev_set_mode;
    ladev->device.set_mic_mute = adev_set_mic_mute;
    ladev->device.get_mic_mute = adev_get_mic_mute;
    ladev->device.set_parameters = adev_set_parameters;
    ladev->device.get_parameters = adev_get_parameters;
    ladev->device.get_input_buffer_size = adev_get_input_buffer_size;
    ladev->device.open_output_stream = adev_open_output_stream;
    ladev->device.close_output_stream = adev_close_output_stream;
    ladev->device.open_input_stream = adev_open_input_stream;
    ladev->device.close_input_stream = adev_close_input_stream;
    ladev->device.dump = adev_dump;
    /* Key point:
     * this ties the audio_hw_device_t structure to the hwif
     * (hardwareInterface) interface. createAudioHardware() returns the
     * vendor pointer implementing hardwareInterface, so later calls
     * through hwif are equivalent to calls into the vendor library.
     */
    ladev->hwif = createAudioHardware();
    if (!ladev->hwif) {
        ret = -EIO;
        goto err_create_audio_hw;
    }
    *device = &ladev->device.common;
    return 0;
 
err_create_audio_hw:
    free(ladev);
    return ret;
}

Through the above analysis, the path from the Framework native layer, through the HAL framework, down to the call into the third-party platform vendor library has been opened up.
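
A hypothetical caller could now exercise the function table that legacy_adev_open just filled in, for example:

/* Hypothetical smoke test of the freshly opened device. */
static int check_device(audio_hw_device_t *adev)
{
    int rc = adev->init_check(adev);   /* -> adev_init_check -> hwif->initCheck() */
    if (rc != 0)
        return rc;
    return adev->set_master_volume(adev, 1.0f); /* -> adev_set_master_volume */
}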

2. The write-data operation:
Starting from the Framework native layer, the audio_hw_device_t structure first obtains an audio_stream_out via adev_open_output_stream, so we start the analysis there. The code is as follows:

static int adev_open_output_stream(struct audio_hw_device *dev,
                                   audio_io_handle_t handle,
                                   audio_devices_t devices,
                                   audio_output_flags_t flags,
                                   struct audio_config *config,
                                   struct audio_stream_out **stream_out,
                                   const char *address __unused)
{
    struct legacy_audio_device *ladev = to_ladev(dev);
    status_t status;
    struct legacy_stream_out *out;
    int ret;
 
    // legacy_stream_out here is the wrapper type behind audio_stream_out
    out = (struct legacy_stream_out *)calloc(1, sizeof(*out));
    if (!out)
        return -ENOMEM;
 
    devices = convert_audio_device(devices, HAL_API_REV_2_0, HAL_API_REV_1_0);
    // this ties audio_stream_out to ladev->hwif
    out->legacy_out = ladev->hwif->openOutputStreamWithFlags(devices, flags,
                                                    (int *) &config->format,
                                                    &config->channel_mask,
                                                    &config->sample_rate, &status);
    if (!out->legacy_out) {
        ret = status;
        goto err_open;
    }
 
    out->stream.common.get_sample_rate = out_get_sample_rate;
    out->stream.common.set_sample_rate = out_set_sample_rate;
    out->stream.common.get_buffer_size = out_get_buffer_size;
    out->stream.common.get_channels = out_get_channels;
    out->stream.common.get_format = out_get_format;
    out->stream.common.set_format = out_set_format;
    out->stream.common.standby = out_standby;
    out->stream.common.dump = out_dump;
    out->stream.common.set_parameters = out_set_parameters;
    out->stream.common.get_parameters = out_get_parameters;
    out->stream.common.add_audio_effect = out_add_audio_effect;
    out->stream.common.remove_audio_effect = out_remove_audio_effect;
    out->stream.get_latency = out_get_latency;
    out->stream.set_volume = out_set_volume;
    out->stream.write = out_write;
    out->stream.get_render_position = out_get_render_position;
    out->stream.get_next_write_timestamp = out_get_next_write_timestamp;
 
    // hand out->stream back through the stream_out parameter
    *stream_out = &out->stream;
    return 0;
 
err_open:
    free(out);
    *stream_out = NULL;
    return ret;
}

Here the audio output stream (of type audio_stream_out) is obtained through hwif, out->stream is initialized, and the write function out_write is registered; the stream is then returned through the stream_out pointer parameter. When the upper layer performs a write, the out_write function is executed. The code is as follows:

static ssize_t out_write(struct audio_stream_out *stream, const void* buffer,
                         size_t bytes)
{
    struct legacy_stream_out *out = reinterpret_cast<struct legacy_stream_out *>(stream);
    return out->legacy_out->write(buffer, bytes);
}

Here the write method of the third-party platform vendor library (out->legacy_out->write) is called directly. On a Qualcomm platform, for instance, hwif is AudioHardwareALSA and the write in question is AudioStreamOutALSA's write method, which ultimately writes the data with pcm_write.
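
Putting the two steps together, here is a hedged sketch of how a caller above this HAL might play one buffer of PCM; the helper play_one_buffer and its parameter values are illustrative, while the fields and calls are those defined in hardware/audio.h:

#include <hardware/audio.h>
#include <string.h>

/* Hypothetical helper: open an output stream, write one PCM buffer, close. */
static int play_one_buffer(audio_hw_device_t *adev, const void *pcm, size_t bytes)
{
    struct audio_config config;
    memset(&config, 0, sizeof(config));
    config.sample_rate  = 44100;
    config.channel_mask = AUDIO_CHANNEL_OUT_STEREO;
    config.format       = AUDIO_FORMAT_PCM_16_BIT;

    struct audio_stream_out *out = NULL;
    /* lands in adev_open_output_stream shown above */
    int rc = adev->open_output_stream(adev, 0 /* handle */,
                                      AUDIO_DEVICE_OUT_SPEAKER,
                                      AUDIO_OUTPUT_FLAG_NONE,
                                      &config, &out, NULL /* address */);
    if (rc != 0)
        return rc;

    ssize_t written = out->write(out, pcm, bytes); /* dispatches to out_write */

    adev->close_output_stream(adev, out);
    return written < 0 ? (int)written : 0;
}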

3. The read-data operation:
Starting from the Framework native layer, the audio_hw_device_t structure first obtains an audio_stream_in via adev_open_input_stream, so we start the analysis there. The code is as follows:

 
/** This method creates and opens the audio hardware input stream */
static int adev_open_input_stream(struct audio_hw_device *dev,
                                  audio_io_handle_t handle,
                                  audio_devices_t devices,
                                  struct audio_config *config,
                                  struct audio_stream_in **stream_in,
                                  audio_input_flags_t flags __unused,
                                  const char *address __unused,
                                  audio_source_t source __unused)
{
    struct legacy_audio_device *ladev = to_ladev(dev);
    status_t status;
    struct legacy_stream_in *in;
    int ret;

    // legacy_stream_in here is the wrapper type behind audio_stream_in
    in = (struct legacy_stream_in *)calloc(1, sizeof(*in));
    if (!in)
        return -ENOMEM;
 
    devices = convert_audio_device(devices, HAL_API_REV_2_0, HAL_API_REV_1_0);
    // this ties audio_stream_in to ladev->hwif
    in->legacy_in = ladev->hwif->openInputStream(devices, (int *) &config->format,
                                                 &config->channel_mask, &config->sample_rate,
                                                 &status, (AudioSystem::audio_in_acoustics)0);
    if (!in->legacy_in) {
        ret = status;
        goto err_open;
    }
 
    in->stream.common.get_sample_rate = in_get_sample_rate;
    in->stream.common.set_sample_rate = in_set_sample_rate;
    in->stream.common.get_buffer_size = in_get_buffer_size;
    in->stream.common.get_channels = in_get_channels;
    in->stream.common.get_format = in_get_format;
    in->stream.common.set_format = in_set_format;
    in->stream.common.standby = in_standby;
    in->stream.common.dump = in_dump;
    in->stream.common.set_parameters = in_set_parameters;
    in->stream.common.get_parameters = in_get_parameters;
    in->stream.common.add_audio_effect = in_add_audio_effect;
    in->stream.common.remove_audio_effect = in_remove_audio_effect;
    in->stream.set_gain = in_set_gain;
    in->stream.read = in_read;
    in->stream.get_input_frames_lost = in_get_input_frames_lost;

    // hand in->stream back through the stream_in parameter
    *stream_in = &in->stream;
    return 0;
 
err_open:
    free(in);
    *stream_in = NULL;
    return ret;
}

Here the audio input stream (of type audio_stream_in) is obtained through hwif, in->stream is initialized, and the read function in_read is registered; the stream is then returned through the stream_in pointer parameter. When the upper layer performs a read, the in_read function is executed. The code is as follows:

static ssize_t in_read(struct audio_stream_in *stream, void* buffer,
                       size_t bytes)
{
    struct legacy_stream_in *in =
        reinterpret_cast<struct legacy_stream_in *>(stream);
    return in->legacy_in->read(buffer, bytes);
}

Here the read method of the third-party platform vendor library (in->legacy_in->read) is called directly. On a Qualcomm platform, hwif is AudioHardwareALSA and the read in question is AudioStreamInALSA's read method, which ultimately reads the data with pcm_read.
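
Symmetrically, a hypothetical capture helper built on this HAL might look like this (parameter values illustrative):

#include <hardware/audio.h>
#include <string.h>

/* Hypothetical helper: open an input stream, read one PCM buffer, close. */
static ssize_t record_one_buffer(audio_hw_device_t *adev, void *buf, size_t bytes)
{
    struct audio_config config;
    memset(&config, 0, sizeof(config));
    config.sample_rate  = 48000;
    config.channel_mask = AUDIO_CHANNEL_IN_MONO;
    config.format       = AUDIO_FORMAT_PCM_16_BIT;

    struct audio_stream_in *in = NULL;
    /* lands in adev_open_input_stream shown above */
    int rc = adev->open_input_stream(adev, 0 /* handle */,
                                     AUDIO_DEVICE_IN_BUILTIN_MIC,
                                     &config, &in,
                                     AUDIO_INPUT_FLAG_NONE,
                                     NULL /* address */,
                                     AUDIO_SOURCE_MIC);
    if (rc != 0)
        return rc;

    ssize_t n = in->read(in, buf, bytes); /* dispatches to in_read */
    adev->close_input_stream(adev, in);
    return n;
}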

4. Getting parameters:
Starting from the Framework native layer, a get-parameters request on the audio_hw_device_t structure ultimately calls adev_get_parameters, so we start the analysis there. The code is as follows:

static char * adev_get_parameters(const struct audio_hw_device *dev,
                                  const char *keys)
{
    const struct legacy_audio_device *ladev = to_cladev(dev);
    String8 s8;
 
    s8 = ladev->hwif->getParameters(String8(keys));
    return strdup(s8.string());
}

Here, the getParameters method of the third-party platform vendor library is directly called.

5. Setting parameters:
Starting from the Framework native layer, a set-parameters request on the audio_hw_device_t structure ultimately calls adev_set_parameters, so we start the analysis there. The code is as follows:

static int adev_set_parameters(struct audio_hw_device *dev, const char *kvpairs)
{
    struct legacy_audio_device *ladev = to_ladev(dev);
    return ladev->hwif->setParameters(String8(kvpairs));
}

Here, the setParameters method of the third-party platform vendor library is directly called.
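
Both calls carry parameters as "key=value;key=value" strings. A hedged round-trip sketch, parsing the reply with the libcutils str_parms helpers (the helper set_then_get_routing is hypothetical; "routing=2" corresponds to AUDIO_DEVICE_OUT_SPEAKER):

#include <hardware/audio.h>
#include <cutils/str_parms.h>
#include <stdlib.h>

/* Hypothetical round trip through the kvpair interface. */
static int set_then_get_routing(audio_hw_device_t *adev)
{
    /* -> adev_set_parameters -> hwif->setParameters() */
    int rc = adev->set_parameters(adev, "routing=2");
    if (rc != 0)
        return rc;

    /* -> adev_get_parameters; the HAL returns a strdup'ed string */
    char *reply = adev->get_parameters(adev, "routing");
    struct str_parms *parms = str_parms_create_str(reply);
    int routing = 0;
    str_parms_get_int(parms, "routing", &routing); /* parse "routing=<n>" */
    str_parms_destroy(parms);
    free(reply);
    return routing;
}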

2.3. Summary of the process of the Audio HAL layer

  1. Determine the name of the library file from the configuration file.
    HAL files generally live under /system/lib/hardware; the name of the library that operates the audio hardware is specified in /system/etc/policy_config (this is the part of the HAL supplied by the manufacturer).

  2. Load the library (*.so) file and call the open function in the HAL. The HAL constructs an audio_hw_device structure containing the various functions, in particular open_output_stream / open_input_stream. AudioFlinger then constructs an AudioHwDev object around the audio_hw_device structure and adds it to mAudioHwDevs.

  3. Call open_input_stream/open_output_stream on the HAL's audio_hw_device structure, which constructs the audio_stream_in/audio_stream_out structure.

  4. Write/read data. The component that controls the sound card directly through system calls is the tinyalsa library, located in /external/tinyalsa. It compiles into the library libtinyalsa.so (only two source files are involved, mixer.c and pcm.c) and into the tools tinycap, tinymix, tinypcminfo and tinyplay, which can be used to control the audio path directly and run record/playback tests. The pcm_XXX functions used to operate the sound card are a wrapper over the driver layer (see the sketch after this list).

  5. The tinyalsa library then drives the audio driver, and the audio driver drives the sound-card hardware.
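
For reference, a minimal tinyalsa playback sketch follows (the file path, card/device numbers and PCM configuration are assumptions; the pcm_* calls are the tinyalsa API that the HAL's write path also ends up in):

#include <tinyalsa/asoundlib.h>
#include <stdio.h>

/* Hypothetical mini tinyplay: push a raw 16-bit stereo PCM file to card 0. */
static int play_raw_file(const char *path)
{
    struct pcm_config cfg = {0};
    cfg.channels = 2;
    cfg.rate = 44100;
    cfg.period_size = 1024;
    cfg.period_count = 4;
    cfg.format = PCM_FORMAT_S16_LE;

    struct pcm *pcm = pcm_open(0 /* card */, 0 /* device */, PCM_OUT, &cfg);
    if (!pcm || !pcm_is_ready(pcm))
        return -1;

    FILE *fp = fopen(path, "rb");
    if (fp) {
        char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), fp)) > 0)
            pcm_write(pcm, buf, n); /* same call AudioStreamOutALSA ends up making */
        fclose(fp);
    }
    pcm_close(pcm);
    return 0;
}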

Summary

Once the software architecture of Audio is clear, understanding and studying the individual layers is much easier. You do not have to grasp everything up front; a general impression is enough. After working with it for a while, looking back you will discover more of the ingenuity in the software framework.

Reference link:
https://blog.csdn.net/vviccc/article/details/105417542
https://www.cnblogs.com/ouyshy/p/13445250.html
