1. Introduction to the Audio architecture
2. The Android Audio system framework
3. Audio architecture and per-layer code distribution diagram
4. The audio framework refined further within the Android system
5. Creating and registering a sound card
6. The structure of the Android Audio system
7. Introduction to Audio principles
8. Audio policy formulation and the policy-execution call flow
9. The Android AudioPolicyService startup process
10. All audio interface devices in the Android system are saved in AudioFlinger's member variable mAudioHwDevs
11. audio_policy.conf can define multiple audio interfaces at the same time
12. Loading each audio interface's library file through AudioFlinger's loadHwModule and creating the PlaybackThread playback thread or RecordThread recording thread
13. Analysis of the call flow of AudioFlinger's openInput() method
14. Analysis of the call flow of AudioFlinger's openOutput() method
15. To play audio data, the Audio system must create an abstract audio output interface object and open the audio output
16. The process of opening the audio input
17. How an opened audio output is represented in AudioFlinger and AudioPolicyService
18. How an opened audio input is represented in AudioFlinger and AudioPolicyService
19. AudioPolicyService loads all audio interfaces defined by the system and generates the corresponding data objects
20. The relationship between AudioPolicyService, AudioTrack, and AudioFlinger
21. The process of registering AudioPolicyService as a service
22. The AudioTrack construction process
23. The relationship between AudioTrack and AudioFlinger
24. The relationship between audio_policy, AudioPolicyService, and AudioPolicyCompatClient
1. Introduction to the Audio architecture
APP
The top layer of the entire audio system
Framework
MediaPlayer and MediaRecorder, plus AudioTrack and AudioRecorder; the Android system also provides the AudioManager, AudioService, and AudioSystem classes for controlling the audio system. All of these are provided by the framework to make upper-layer application development easier.
Libraries
The system services AudioFlinger and AudioPolicyService live at this layer (alongside services such as ServiceManager, LocationManagerService, and ActivityManagerService); another important audio-related system service is MediaPlayerService.
HAL
The hardware abstraction layer is what AudioFlinger accesses directly. This illustrates two points: on the one hand, AudioFlinger does not call the underlying driver directly; on the other hand, the layers above AudioFlinger (including MediaPlayerService at the same layer) only need to interact with it to realize audio-related functionality. AudioFlinger can therefore be regarded as the real "isolation board" of the Android audio system: no matter how the layers below change, the upper-layer implementation stays compatible. The audio hardware abstraction layer mainly serves two clients, AudioFlinger and AudioPolicyService. The latter does not correspond to a real device; it is a virtual device intended to let manufacturers easily customize their own policies. The abstraction layer's task is to actually associate AudioFlinger/AudioPolicyService with the hardware devices.
In earlier versions, the Audio system in Android relied on alsa-lib; later it switched to tinyalsa. Such a change must not break the upper layers, so the Audio HAL provides a unified interface defining how it communicates with AudioFlinger/AudioPolicyService. This is the purpose of audio_hw_device, audio_stream_in, audio_stream_out, and so on. Most of these struct types are just collections of function pointer definitions: "shells". When AudioFlinger/AudioPolicyService initializes, it looks for the best-matching implementation in the system (these implementations live in libraries named audio.primary.*, audio.a2dp.*, etc.) to fill in these shells.
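The "shell" idea can be illustrated with a self-contained sketch: a struct of function pointers that a hypothetical primary implementation fills in at load time. The names fake_audio_hw_device, primary_*, and open_primary_device are invented for illustration and are far simpler than the real audio_hw_device in audio.h.

```cpp
#include <cassert>

// Simplified stand-in for the real audio_hw_device in hardware/audio.h:
// a struct of function pointers (a "shell") that a vendor library fills in.
struct fake_audio_hw_device {
    int (*init_check)(const fake_audio_hw_device* dev);
    int (*set_master_volume)(fake_audio_hw_device* dev, float volume);
    float master_volume;  // illustrative state owned by the implementation
};

// A hypothetical "best matching implementation" (think audio.primary.<board>.so)
// providing the concrete functions that fill the shell.
static int primary_init_check(const fake_audio_hw_device*) { return 0; }
static int primary_set_master_volume(fake_audio_hw_device* dev, float volume) {
    if (volume < 0.0f || volume > 1.0f) return -1;  // reject invalid volume
    dev->master_volume = volume;
    return 0;
}

// What AudioFlinger conceptually does at load time: fill the shell.
fake_audio_hw_device open_primary_device() {
    fake_audio_hw_device dev{};
    dev.init_check = primary_init_check;
    dev.set_master_volume = primary_set_master_volume;
    dev.master_volume = 1.0f;
    return dev;
}
```

Once the shell is filled, callers above only ever go through the function pointers, so swapping alsa-lib for tinyalsa underneath changes nothing for them.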
There are two clues for understanding the Android audio system:
Taking libraries as the clue: for example, AudioPolicyService and AudioFlinger are both in the libaudioflinger library, while AudioTrack, AudioRecorder, and a series of related implementations are in the libmedia library.
Taking processes as the clue: a library does not represent a process, and processes depend on libraries to run. Even though some classes are implemented in the same library, that does not mean they are called from the same process. For example, AudioFlinger and AudioPolicyService both reside in a system process named mediaserver, while AudioTrack/AudioRecorder and MediaPlayer/MediaRecorder are just parts of application processes; they communicate with the system processes through Binder services.
2. The Android Audio system framework
3. Audio architecture and per-layer code distribution diagram
4. The audio framework refined further within the Android system
5. Creating and registering a sound card
6. The structure of the Android Audio system
7. Introduction to Audio principles
Setting aside MediaPlayer and MediaRecorder, which are directly tied to application development, the core of the entire audio system is built from three parts: AudioFlinger, AudioPolicyService, and AudioTrack/AudioRecorder. The first two are system services that reside in the mediaserver process and continuously handle requests from AudioTrack/AudioRecorder. Audio playback and recording are similar in overall flow, so the analysis focuses on AudioTrack.
mediaserver starts all media-related native-layer services (including AudioFlinger, MediaPlayerService, CameraService, and AudioPolicyService). The compiled mediaserver binary is flashed to the device at /system/bin/mediaserver and is started by the init process at system boot.
Audio system structure
libmedia.so provides the Audio interfaces; these interfaces are open to the upper layers as well as to native code.
libaudioflinger.so provides the implementation of the Audio interfaces.
The Audio hardware abstraction layer provides an interface to the hardware for AudioFlinger to call.
Audio uses JNI and Java to provide interfaces to the upper layer.
The Audio framework part of the media library
The core framework of Android's Audio is provided in the media library, which mainly implements the three classes AudioSystem, AudioTrack, and AudioRecorder, and provides the IAudioFlinger interface. Through IAudioFlinger, the two interfaces IAudioTrack and IAudioRecorder can be obtained, used for sound playback and recording respectively. AudioTrack and AudioRecorder are implemented by calling IAudioTrack and IAudioRecorder.
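The layering just described — a client-side AudioTrack that obtains an IAudioTrack from IAudioFlinger and delegates to it — can be sketched as follows. This is a toy model: the real classes communicate over Binder and exchange audio data through shared memory, and the names FakeAudioFlinger and ServerTrack are invented for illustration.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>

// Server-side per-track interface, in the spirit of IAudioTrack.
struct IAudioTrack {
    virtual ~IAudioTrack() = default;
    virtual void start() = 0;
    virtual void stop() = 0;
    virtual size_t write(const void* data, size_t bytes) = 0;
};

// The server-side track object (in the real system it lives in AudioFlinger,
// inside the mediaserver process).
struct ServerTrack : IAudioTrack {
    bool active = false;
    size_t bytes_received = 0;
    void start() override { active = true; }
    void stop() override { active = false; }
    size_t write(const void*, size_t bytes) override {
        if (!active) return 0;          // inactive tracks accept no data
        bytes_received += bytes;
        return bytes;
    }
};

// Master interface, in the spirit of IAudioFlinger: hands out track interfaces.
struct IAudioFlinger {
    virtual ~IAudioFlinger() = default;
    virtual std::unique_ptr<IAudioTrack> createTrack() = 0;
};

struct FakeAudioFlinger : IAudioFlinger {
    std::unique_ptr<IAudioTrack> createTrack() override {
        return std::unique_ptr<IAudioTrack>(new ServerTrack());
    }
};

// Client-side AudioTrack: simply delegates to the IAudioTrack it obtained.
class AudioTrack {
    std::unique_ptr<IAudioTrack> mTrack;
public:
    explicit AudioTrack(IAudioFlinger& af) : mTrack(af.createTrack()) {}
    void start() { mTrack->start(); }
    void stop()  { mTrack->stop(); }
    size_t write(const void* data, size_t bytes) { return mTrack->write(data, bytes); }
};
```

The design point this illustrates: the client class is a thin proxy, so all mixing and device handling can stay on the server side of the interface.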
Audio system header file
The path is: frameworks/base/include/media/
AudioSystem.h
IAudioFlinger.h
AudioTrack.h
IAudioTrack.h
AudioRecorder.h
IAudioRecorder.h
The Ixxx interfaces are implemented by AudioFlinger; the other interfaces are exposed to the upper layer through JNI.
Audio system header files are in the frameworks/base/include/media/ directory, the main header files are as follows
AudioSystem.h: The interface of the Audio part of the media library to the upper-layer manager
IAudioFlinger.h: The master interface that needs to be implemented by the lower layer
AudioTrack.h: the upper interface of the playback part
IAudioTrack.h: The interface that needs to be implemented by the lower layer in the playback part
AudioRecorder.h: the upper interface of the recording part
IAudioRecorder.h: the interface that needs to be implemented by the lower layer in the recording part
The three interfaces IAudioFlinger.h, IAudioTrack.h, and IAudioRecorder.h are implemented through inheritance by the lower layer (i.e. AudioFlinger).
AudioSystem.h, AudioTrack.h, and AudioRecorder.h are the interfaces provided to the upper layer. They are used not only by native programs (for example, a sound player or recorder) but can also be exposed to the Java layer through JNI.
Both AudioTrack and AudioRecorder have interfaces such as start, stop, and pause. The former additionally has a write interface for sound playback, and the latter a read interface for sound recording.
AudioSystem is used to control the Audio system. It mainly contains set and get interfaces and is presented to the upper layer as a utility class.
AudioFlinger is the core of the Audio system. Data from AudioTrack is ultimately processed here and written to the Audio HAL layer.
MediaPlayer still creates an AudioTrack in the framework layer and passes the decoded PCM stream to it; the AudioTrack passes the data to AudioFlinger for mixing, after which it is sent to the hardware for playback. In this sense MediaPlayer contains an AudioTrack and uses it to play music.
MediaPlayer provides a more complete wrapper and state control; compared with MediaPlayer, AudioTrack is leaner and more efficient. In fact, the internal implementation of MediaPlayerService uses AudioTrack.
There are two Audio hardware HAL interface definitions:
legacy: hardware/libhardware_legacy/include/hardware_legacy/AudioHardwareInterface.h
non-legacy: hardware/libhardware/include/hardware/audio.h
The former is the audio device interface definition used in Android 2.3 and earlier; the latter is the interface definition introduced in 4.0. To stay compatible with the earlier design, 4.0 adds an intermediate layer: hardware/libhardware_legacy/audio/audio_hw_hal.cpp. Its structure is similar to the other audio_hw.c implementations; the difference is that its open method wraps the legacy implementation behind the non-legacy audio.h interface. To be precise, an intermediate layer is needed to connect the legacy interface with the non-legacy one, and audio_hw_hal.cpp plays exactly that role.
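The adapter role that audio_hw_hal.cpp plays can be sketched as follows, with invented names: a C-style function table (the non-legacy side) whose entries forward to a C++ object (the legacy side).

```cpp
#include <cassert>

// "Legacy" side: a C++ interface in the style of AudioHardwareInterface.
struct LegacyAudioHardware {
    virtual ~LegacyAudioHardware() = default;
    virtual int initCheck() = 0;
};

// A vendor's legacy implementation (purely illustrative).
struct VendorLegacyHardware : LegacyAudioHardware {
    int initCheck() override { return 0; }  // 0 = device is healthy
};

// "Non-legacy" side: a C-style device struct in the style of audio.h.
struct audio_hw_device_sketch {
    void* impl;  // hides the legacy C++ object behind the C interface
    int (*init_check)(const audio_hw_device_sketch* dev);
};

// The adapter function (the role audio_hw_hal.cpp plays): a C entry point
// that recovers the legacy object and forwards the call to it.
static int adapter_init_check(const audio_hw_device_sketch* dev) {
    return static_cast<LegacyAudioHardware*>(dev->impl)->initCheck();
}

// "Opening" the device wraps the legacy object into the non-legacy table.
audio_hw_device_sketch wrap_legacy(LegacyAudioHardware* hw) {
    return audio_hw_device_sketch{hw, adapter_init_check};
}
```

Callers that only know the non-legacy struct can now drive a legacy implementation without ever seeing it.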
hardware/libhardware/modules/audio/
createAudioHardware() function
external/tinyalsa/
mixer.c is the alsa-lib-like control part; it handles audio component switches, volume adjustment, and so on.
pcm.c is the alsa-lib-like PCM part, used for playing and recording audio PCM data.
The directories hardware/libhardware_legacy/audio/, hardware/libhardware/modules/audio/, and device/samsung/tuna/audio/ all sit at the same layer: the first is legacy audio, compatible with the alsa_sound of the 2.2 era; the second is the stub audio interface; the third is the implementation of Samsung Tuna's audio abstraction layer. The call hierarchy is: AudioFlinger -> audio_hw -> tinyalsa.
The implementation of the Audio hardware abstraction layer may differ from system to system; you need to subclass the corresponding classes in code and implement them. As the bridge between the Android native framework layer and the driver, AudioFlinger implements the interfaces declared in libmedia.so (the Audio native framework classes): the upper layer calls only the libmedia.so interfaces, but what is actually invoked is libaudioflinger.so. Interfaces are provided to the upper layer via JNI and Java, and the JNI part is implemented by calling the interfaces the libmedia.so library provides.
The Audio hardware abstraction layer provides the interface to the hardware for AudioFlinger to call. It is the part that each platform mainly needs to implement on its own during development, because the Audio system in Android does not involve encoding or decoding; it is only responsible for the interaction between the upper-layer system and the underlying audio hardware, so PCM is usually used as the input/output format.
The hardware abstraction layer mainly implements two classes, AudioStreamInALSA and AudioStreamOutALSA, which in turn call methods of the ALSAStreamOps class in the same file. AudioStreamInALSA is the path used by the recording part; in the constructor of AudioStreamInALSA, some ALSA initialization parameters are set.
The read method of AudioStreamInALSA is its most important method: a read call at the AudioFlinger layer becomes a call to AudioStreamInALSA's read. Because the recording part has to transfer both mono and dual-channel data, the read method is modified to perform the channel conversion there, which makes recording work correctly and avoids the drawback that patching the data during encoding leaves other encodings broken.
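As a rough illustration of the mono/stereo handling mentioned above, a conversion that duplicates each mono sample into both channels might look like this (a self-contained sketch, not the actual AudioStreamInALSA::read patch):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Expand mono 16-bit PCM to interleaved stereo by duplicating each sample
// into the left and right channels.
std::vector<int16_t> mono_to_stereo(const std::vector<int16_t>& mono) {
    std::vector<int16_t> stereo;
    stereo.reserve(mono.size() * 2);
    for (int16_t s : mono) {
        stereo.push_back(s);  // left channel
        stereo.push_back(s);  // right channel
    }
    return stereo;
}
```

Doing the conversion inside the read path keeps the data correct for every consumer, rather than fixing it up separately in each encoder.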
8. Audio policy formulation and the policy-execution call flow
AudioPolicyService is the policy maker, and AudioFlinger is the executor of the policy
AudioTrack is the client of AudioFlinger, and AudioFlinger is the hub of Audio management in the Android system
In Android, calls ultimately flow from AudioPolicyService into AudioFlinger, because AudioFlinger is what actually creates and manages the hardware devices.
The AudioFlinger class represents the entire AudioFlinger service; all the other working classes are defined inside it as inner classes.
9. The Android AudioPolicyService startup process
Work done by AudioPolicyService
Load the audio_policy.default.so library to get the audio_policy_module module
Open the audio_policy_device device through the audio_policy_module module
Create audio_policy through audio_policy_device device
The process of loading the hardware abstraction layer module with the hw_get_module function
audio_policy is implemented in audio_policy_hal.cpp, and audio_policy_service_ops is implemented in AudioPolicyService.cpp. The create_audio_policy() function creates and initializes a legacy_audio_policy object. AudioPolicyCompatClient is a wrapper class for audio_policy_service_ops and provides the interfaces defined in the audio_policy_service_ops data structure.
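AudioPolicyCompatClient's wrapping of audio_policy_service_ops can be sketched as a thin C++ class over a C function table. The struct layout and the module-handle value below are invented for illustration:

```cpp
#include <cassert>
#include <cstdint>

// Simplified function table in the spirit of audio_policy_service_ops.
struct audio_policy_service_ops_sketch {
    uint32_t (*load_hw_module)(void* service, const char* name);
};

// The wrapper: hides the C table behind a C++ interface, which is the role
// AudioPolicyCompatClient plays for the audio policy manager.
class CompatClient {
    const audio_policy_service_ops_sketch* mOps;
    void* mService;  // opaque pointer to the owning service
public:
    CompatClient(const audio_policy_service_ops_sketch* ops, void* service)
        : mOps(ops), mService(service) {}
    uint32_t loadHwModule(const char* name) {
        return mOps->load_hw_module(mService, name);
    }
};

// A fake "service" entry standing in for AudioPolicyService's implementation:
// returns an invented non-zero handle for a valid module name, 0 on failure.
static uint32_t fake_load_hw_module(void*, const char* name) {
    return (name && name[0]) ? 42 : 0;
}
```

The policy manager only ever talks to the C++ wrapper, so the C table can change (or be proxied across processes) without touching the manager.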
A reference to the AudioPolicyCompatClient object is kept so that the audio manager AudioPolicyManager can use the interfaces in audio_policy_service_ops.
Load the /vendor/etc/audio_policy.conf configuration file first; if it does not exist, load /system/etc/audio_policy.conf; if that still does not exist, use the defaultAudioPolicyConfig() function to set up a default audio interface.
Set the volume adjustment points corresponding to various audio streams
Open the corresponding audio interface hardware abstraction library by name
Open the output corresponding to mAttachedOutputDevices
Encapsulate the output IOProfile as an AudioOutputDescriptor object
Set the default output device of the current audio interface
Turn on the output, create a PlaybackThread thread in AudioFlinger, and return the id of the thread
Set the output devices that can be used as mAttachedOutputDevices
Save the output descriptor object AudioOutputDescriptor and the created PlaybackThread thread id in the form of key-value pairs
Set the default output device
The following steps are mainly completed during the construction of the AudioPolicyManagerBase object
loadAudioPolicyConfig(AUDIO_POLICY_CONFIG_FILE) Load audio_policy.conf configuration file
initializeVolumeCurves() initializes the volume adjustment points corresponding to various audio streams
Load audio policy hardware abstraction library: mpClientInterface->loadHwModule(mHwModules[i]->mName)
Open attached_output_devices output mpClientInterface->openOutput();
Save the output device descriptor object addOutput(output, outputDesc);
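The addOutput(output, outputDesc) bookkeeping above amounts to a key-value map from output handles to descriptors. A minimal sketch (types simplified from the real audio_io_handle_t and AudioOutputDescriptor):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>

// Simplified stand-ins for audio_io_handle_t and AudioOutputDescriptor.
using audio_io_handle = int;
struct OutputDescriptor {
    std::string module;  // which audio interface opened this output
    int device;          // currently routed device (a bitmask in the real code)
};

// The key-value bookkeeping performed by addOutput(): handle -> descriptor.
class OutputRegistry {
    std::map<audio_io_handle, OutputDescriptor> mOutputs;
public:
    void addOutput(audio_io_handle output, const OutputDescriptor& desc) {
        mOutputs[output] = desc;
    }
    const OutputDescriptor* find(audio_io_handle output) const {
        auto it = mOutputs.find(output);
        return it == mOutputs.end() ? nullptr : &it->second;
    }
    std::size_t size() const { return mOutputs.size(); }
};
```

Keeping the handle as the key lets later routing decisions look up an output's capabilities from the thread id that AudioFlinger returned.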
10. All audio interface devices in the Android system are saved in AudioFlinger's member variable mAudioHwDevs
audio_policy.conf file
From the audio_policy.conf file it can be seen that the system contains audio interfaces such as primary, a2dp, and usb, corresponding to audio.<primary/a2dp/usb>.<device>.so libraries in the system. Each audio interface contains several outputs and inputs, and each output or input supports several devices, along with sampling rates, channel counts, and other information. This device, sampling-rate, and channel information is stored in the IOProfile of the respective module. With the audio_policy.conf configuration described above, the system ends up generating six modules (primary, a2dp, hdmi, r_submix, hs_usb, and usb) and seven outputs.
11. audio_policy.conf can define multiple audio interfaces at the same time
Different Android products usually differ in audio design, and these differences are captured in Audio's configuration file, audio_policy.conf. The Android system has two storage paths for the audio configuration file; the paths can be found in the AudioPolicyManagerBase.cpp file.
From the AudioPolicyManager.cpp file it can be seen that the system first loads the configuration file in the vendor/etc directory and then the one in the system/etc directory. If both fail to load, the system loads a default configuration and names it the primary module.
The global configuration information is read through the loadGlobalConfig(root) function, and all audio interfaces configured for the system are loaded through the loadHwModules() function. Since audio_policy.conf can define multiple audio interfaces, this function calls loadHwModule() in a loop to parse the parameter information of each audio interface. Android defines the HwModule class to describe each audio interface's parameters and the IOProfile class to describe input/output mode configurations.
/system/etc/audio_policy.conf
/vendor/etc/audio_policy.conf
Load the audio_module module
By reading the audio_policy.conf configuration file, AudioPolicyManager learns which audio interfaces the system currently supports, as well as the attached input and output devices and the default output device. Next, the hardware abstraction libraries for these audio interfaces need to be loaded.
AudioPolicyClientInterface provides the interface function for loading an audio interface hardware abstraction library. AudioPolicyCompatClient implements the AudioPolicyClientInterface interface by proxying through audio_policy_service_ops.
AudioPolicyCompatClient hands the audio module loading over to audio_policy_service_ops, and AudioPolicyService hands it over to AudioFlinger.
When AudioPolicyManagerBase is constructed, it analyzes which audio interfaces exist in the system (primary, a2dp, and usb) according to the audio_policy.conf provided by the user, then loads the library file for each audio interface through AudioFlinger::loadHwModule, and opens the outputs (openOutput) and inputs (openInput) in turn.
When opening an audio output, an audio_stream_out channel is created, an AudioStreamOut object is created, and a new PlaybackThread playback thread is created.
When opening an audio input, an audio_stream_in channel is created, an AudioStreamIn object is created, and a RecordThread recording thread is created.
The audio_policy.conf file defines two kinds of audio configuration information:
The audio input/output devices supported by the current system and the default input/output devices, set through the global_configuration item. Three kinds of audio device information are defined in global_configuration:
attached_output_devices: connected output devices
default_output_device: Default output device
attached_input_devices: connected input devices
Audio interface information supported by the system
audio_policy.conf defines all audio interface parameter information supported by the system, such as primary, a2dp, usb, etc.
Each audio interface includes outputs and inputs; each output or input includes multiple configurations; and each configuration supports multiple audio devices. AudioPolicyManagerBase first loads /vendor/etc/audio_policy.conf; if that file does not exist, it loads /system/etc/audio_policy.conf.
audio_policy.conf can define multiple audio interfaces at the same time; each audio interface contains several outputs and inputs; each output or input supports multiple input/output modes; and each mode supports several devices.
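A trimmed audio_policy.conf fragment illustrating this nesting (the device names, rates, and masks below are illustrative examples, not a configuration for any particular board):

```
global_configuration {
  attached_output_devices AUDIO_DEVICE_OUT_SPEAKER
  default_output_device AUDIO_DEVICE_OUT_SPEAKER
  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC
}

audio_hw_modules {
  primary {
    outputs {
      primary {
        sampling_rates 44100
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER
        flags AUDIO_OUTPUT_FLAG_PRIMARY
      }
    }
    inputs {
      primary {
        sampling_rates 8000|16000|44100
        channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_BUILTIN_MIC
      }
    }
  }
}
```

Each block under audio_hw_modules becomes an HwModule, and each named output/input block becomes an IOProfile of that module.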
12. Loading each audio interface's library file through AudioFlinger's loadHwModule and creating the PlaybackThread playback thread or RecordThread recording thread
13. Analysis of the call flow of AudioFlinger's openInput() method
14. Analysis of the call flow of AudioFlinger's openOutput() method
15. To play audio data, the Audio system must create an abstract audio output interface object and open the audio output
16. The process of opening the audio input
17. How an opened audio output is represented in AudioFlinger and AudioPolicyService
18. How an opened audio input is represented in AudioFlinger and AudioPolicyService
19. AudioPolicyService loads all audio interfaces defined by the system and generates the corresponding data objects
20. The relationship between AudioPolicyService, AudioTrack, and AudioFlinger
21. The process of registering AudioPolicyService as a service
22. The AudioTrack construction process
23. The relationship between AudioTrack and AudioFlinger
24. The relationship between audio_policy, AudioPolicyService, and AudioPolicyCompatClient