Android Audio Driver Basics

The bottom layer of Android audio uses the Linux ALSA driver. The basic hardware required for recording or playing back sound is an audio chip, i.e. a sound card. An ALSA sound card exposes the following kinds of devices (a small tinyalsa sketch follows the list):

1 Control // monitors and controls the state of the sound card

2 Mixer // routes and mixes the various audio signals on the sound card

3 PCM (Pulse Code Modulation) // carries the audio streams for recording and playback
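
These interfaces surface in user space as /dev/snd/controlC* (control/mixer) and /dev/snd/pcmC*D* (PCM) nodes, and tinyalsa is the usual way to talk to them from Android native code. A minimal sketch, assuming tinyalsa is available and card 0 exists:

#include <stdio.h>
#include <tinyalsa/asoundlib.h>

int main(void)
{
    /* Opens the control/mixer node /dev/snd/controlC0 */
    struct mixer *m = mixer_open(0);
    if (!m) {
        fprintf(stderr, "cannot open mixer for card 0\n");
        return 1;
    }
    printf("card 0: %s, %u controls\n",
           mixer_get_name(m), mixer_get_num_ctls(m));
    mixer_close(m);
    return 0;
}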

// List the sound cards known to the system
msmnile_gvmq:/ # cat /proc/asound/cards

// List the PCM device nodes exposed by the system
msmnile_gvmq:/ # ls /dev/snd/pcmC0D*
Naming convention, e.g. pcmC0D16c (the card/device numbers map directly to pcm_open(), as sketched below):
C0: card 0, the sound card ID
D16: device 16, the audio device ID
c: capture, supports recording
p: playback, supports playback
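
In tinyalsa terms, the card and device numbers in the node name are exactly what pcm_open() takes. A hedged sketch (the pcm_config values are illustrative, not taken from this platform):

#include <tinyalsa/asoundlib.h>

/* /dev/snd/pcmC0D16c -> card 0, device 16, capture direction */
struct pcm *open_capture_node(void)
{
    struct pcm_config cfg = {
        .channels     = 2,
        .rate         = 48000,
        .period_size  = 1024,
        .period_count = 4,
        .format       = PCM_FORMAT_S16_LE,
    };
    return pcm_open(0 /* card */, 16 /* device */, PCM_IN, &cfg);
}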

// Show the detailed description of the PCM devices
msmnile_gvmq:/ # cat /proc/asound/pcm
00-00: MultiMedia1 (*) :  : playback 1 : capture 1 
00-01: MultiMedia2 (*) :  : playback 1 : capture 1 
00-02: VoiceMMode1 (*) :  : playback 1 : capture 1
Field meaning, e.g. 00-00: MultiMedia1 (*) : : playback 1 : capture 1
00: sound card ID
00: device ID
MultiMedia1: alias
playback 1: playback supported
capture 1: capture supported

// Show which processes currently hold the snd audio devices
msmnile_gvmq:/ # lsof | grep snd 
[email protected] 17112 audioserve  mem       REG             259,45     23948       1294 /vendor/lib/libsndmonitor.so 
[email protected] 17112 audioserve    7u      CHR              116,2       0t0       8060 /dev/snd/controlC0 
[email protected] 17112 audioserve    8u      CHR              116,2       0t0       8060 /dev/snd/controlC0 
[email protected] 17112 audioserve   54u      CHR              116,2       0t0       8060 /dev/snd/controlC0 
[email protected] 17112 audioserve   60u      CHR              116,3       0t0      14418 /dev/snd/pcmC0D0p 

Channel configuration: mapping between usecase and PCM

For example, Bluetooth (HFP) call audio goes through the PCM with id 29:
// hardware/qcom/audio/configs/msmnile_au/audio_platform_info.xml 
<usecase name="USECASE_AUDIO_HFP_SCO" type="in" id="29" /> 
<usecase name="USECASE_AUDIO_HFP_SCO" type="out" id="29" /> 
<usecase name="USECASE_AUDIO_HFP_SCO_WB" type="in" id="29" /> 
<usecase name="USECASE_AUDIO_HFP_SCO_WB" type="out" id="29" /> 

Apart from audio_platform_info.xml, the default mapping between PCMs and usecases is defined in pcm_device_table[] in
vendor/qcom/opensource/audio-hal/primary-hal/hal/msm8974/platform.c#368
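
That table is indexed by usecase and stream direction. A simplified, hypothetical sketch of what its entries look like (the ids are illustrative, except the HFP id 29 taken from the XML above):

/* Illustrative sketch of pcm_device_table[] in platform.c; real ids are platform specific. */
static int pcm_device_table[AUDIO_USECASE_MAX][2] = {
    /* usecase                               {playback FE id, capture FE id} */
    [USECASE_AUDIO_PLAYBACK_DEEP_BUFFER]   = {0,  0},
    [USECASE_AUDIO_PLAYBACK_LOW_LATENCY]   = {15, 15},
    [USECASE_AUDIO_HFP_SCO]                = {29, 29},  /* matches the audio_platform_info.xml entry above */
    /* ... */
};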

Mixer: audio routing (FE to BE)

// hardware/qcom/audio/configs/msmnile_au/mixer_paths_adp.xml 
<path name="hfp-sco">      
    <ctl name="AUX_PCM_RX Audio Mixer MultiMedia6" value="1" /> <!-- value = 1 means enable -->
    <ctl name="MultiMedia6 Mixer TERT_TDM_TX_0" value="1" /> 
</path> 

Or a sound-device-specific mixer configuration:
<!-- These are actual sound device specific mixer settings -->
<path name="adc1">
    <ctl name="AIF1_CAP Mixer SLIM TX7" value="1"/>
    <ctl name="SLIM_0_TX Channels" value="One" />
    <ctl name="SLIM TX7 MUX" value="DEC6" />
    <ctl name="DEC6 MUX" value="ADC1" />
    <ctl name="IIR1 INP1 MUX" value="DEC6" />
</path>

Or volume adjustment:
<ctl name="DEC1 Volume" value="84" />
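
At run time these XML entries become ordinary ALSA mixer controls. A minimal tinyalsa sketch of setting the controls shown above by hand (card 0 assumed; this is essentially what tinymix, described later, does from the command line):

#include <tinyalsa/asoundlib.h>

int set_route_example(void)
{
    struct mixer *m = mixer_open(0);
    if (!m)
        return -1;

    /* integer/boolean control: 1 enables the FE <-> BE connection */
    struct mixer_ctl *ctl =
        mixer_get_ctl_by_name(m, "AUX_PCM_RX Audio Mixer MultiMedia6");
    if (ctl)
        mixer_ctl_set_value(ctl, 0, 1);

    /* enumerated control: set by string, as in the adc1 path above */
    ctl = mixer_get_ctl_by_name(m, "SLIM_0_TX Channels");
    if (ctl)
        mixer_ctl_set_enum_by_string(ctl, "One");

    /* volume-style control, as in "DEC1 Volume" */
    ctl = mixer_get_ctl_by_name(m, "DEC1 Volume");
    if (ctl)
        mixer_ctl_set_value(ctl, 0, 84);

    mixer_close(m);
    return 0;
}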

Qualcomm Audio Path Settings

This section briefly describes how the Qualcomm HAL connects an audio channel. The channel is divided into three blocks: FE PCMs, BE DAIs, and Devices. These three blocks must be opened and connected in series to complete the setup of an audio channel.

| Front End PCMs |   SoC DSP   | Back End DAIs |  Audio devices  |

                    *************
PCM0 <------------> *           * <----DAI0-----> Codec Headset
                    *           *
PCM1 <------------> *           * <----DAI1-----> Codec Speakers/Earpiece
                    *    DSP    *
PCM2 <------------> *           * <----DAI2-----> MODEM
                    *           *
PCM3 <------------> *           * <----DAI3-----> BT
                    *           *
                    *           * <----DAI4-----> DMIC
                    *           *
                    *           * <----DAI5-----> FM
                    *************

Front End PCMs: the audio front end; each front end corresponds to one PCM device.

The FE PCM is set up when the audio stream is opened. First, understand that each audio stream corresponds to a usecase. For details, see: Android audio system: AudioTrack, AudioFlinger Threads, AudioHAL Usecases, AudioDriver PCMs.

A usecase loosely describes an audio scenario and corresponds to an audio front end (FE). For example (a sketch of the corresponding HAL enum follows this list):
low_latency: low-latency playback such as key clicks, touch sounds, and game sound effects
deep_buffer: playback with relaxed latency requirements, such as music and video
compress_offload: playback of mp3, flac, aac and other compressed sources; the data is not decoded in software but sent straight to the hardware decoder (aDSP)
record: normal recording
record_low_latency: low-latency recording
voice_call: voice (telephony) call
voip_call: VoIP call
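
For reference, these scenarios map to values of the HAL's audio_usecase_t enum. A hedged sketch; the exact names and set of members vary across HAL versions, see audio_hw.h:

/* Illustrative subset of the audio_usecase_t enum in audio_hw.h. */
typedef enum {
    USECASE_AUDIO_PLAYBACK_DEEP_BUFFER,   /* deep_buffer        */
    USECASE_AUDIO_PLAYBACK_LOW_LATENCY,   /* low_latency        */
    USECASE_AUDIO_PLAYBACK_OFFLOAD,       /* compress_offload   */
    USECASE_AUDIO_RECORD,                 /* record             */
    USECASE_AUDIO_RECORD_LOW_LATENCY,     /* record_low_latency */
    USECASE_VOICE_CALL,                   /* voice_call         */
    /* ... VoIP and many more ... */
    AUDIO_USECASE_MAX
} audio_usecase_t;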

Code analysis of start_output_stream():

// Look up the FE PCM id for a given usecase
int platform_get_pcm_device_id(audio_usecase_t usecase, int device_type)
{
    int device_id = -1;
    if (device_type == PCM_PLAYBACK)
        device_id = pcm_device_table[usecase][0];
    else
        device_id = pcm_device_table[usecase][1];
    return device_id;
}

int start_output_stream(struct stream_out *out)
{
    int ret = 0;
    struct audio_usecase *uc_info;
    struct audio_device *adev = out->dev;

    // Look up the FE PCM id for this stream's usecase
    out->pcm_device_id = platform_get_pcm_device_id(out->usecase, PCM_PLAYBACK);
    if (out->pcm_device_id < 0) {
        ALOGE("%s: Invalid PCM device id(%d) for the usecase(%d)",
              __func__, out->pcm_device_id, out->usecase);
        ret = -EINVAL;
        goto error_open;
    }

    // Create a new usecase instance for this audio stream
    uc_info = (struct audio_usecase *)calloc(1, sizeof(struct audio_usecase));

    if (!uc_info) {
        ret = -ENOMEM;
        goto error_config;
    }

    uc_info->id = out->usecase;              // the usecase of this stream
    uc_info->type = PCM_PLAYBACK;            // direction of this stream
    uc_info->stream.out = out;
    uc_info->devices = out->devices;         // initial devices of this stream
    uc_info->in_snd_device = SND_DEVICE_NONE;
    uc_info->out_snd_device = SND_DEVICE_NONE;
    list_add_tail(&adev->usecase_list, &uc_info->list);  // add the new usecase instance to the list

    // Select the sound devices for this stream based on usecase and out->devices
    select_devices(adev, out->usecase);

    ALOGV("%s: Opening PCM device card_id(%d) device_id(%d) format(%#x)",
          __func__, adev->snd_card, out->pcm_device_id, out->config.format);
    if (!is_offload_usecase(out->usecase)) {
        unsigned int flags = PCM_OUT;
        unsigned int pcm_open_retry_count = 0;
        if (out->usecase == USECASE_AUDIO_PLAYBACK_AFE_PROXY) {
            flags |= PCM_MMAP | PCM_NOIRQ;
            pcm_open_retry_count = PROXY_OPEN_RETRY_COUNT;
        } else if (out->realtime) {
            flags |= PCM_MMAP | PCM_NOIRQ;
        } else
            flags |= PCM_MONOTONIC;

        while (1) {
            // Open the FE PCM
            out->pcm = pcm_open(adev->snd_card, out->pcm_device_id,
                                flags, &out->config);
            if (out->pcm == NULL || !pcm_is_ready(out->pcm)) {
                ALOGE("%s: %s", __func__, pcm_get_error(out->pcm));
                if (out->pcm != NULL) {
                    pcm_close(out->pcm);
                    out->pcm = NULL;
                }
                if (pcm_open_retry_count-- == 0) {
                    ret = -EIO;
                    goto error_open;
                }
                usleep(PROXY_OPEN_WAIT_TIME * 1000);
                continue;
            }
            break;
        }
        // ... (remainder of the function omitted)


The voice-call scenario is a little different: it is not an audio stream in the traditional sense. The flow is roughly as follows:
When entering a call, the upper layer first sets the audio mode to AUDIO_MODE_IN_CALL (HAL interface adev_set_mode()), then passes in the audio device with routing=$device (HAL interface out_set_parameters()).
out_set_parameters() checks whether the audio mode is AUDIO_MODE_IN_CALL; if it is, it calls voice_start_call() to open the voice-call FE PCM.
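
A hedged sketch of that check, heavily simplified from the HAL's out_set_parameters(); it assumes the HAL's internal headers for struct stream_out, voice_is_in_call() and friends:

#include <stdlib.h>
#include <cutils/str_parms.h>

/* Simplified sketch, not the verbatim HAL code. */
static int out_set_parameters_sketch(struct audio_stream_out *stream, const char *kvpairs)
{
    struct stream_out *out = (struct stream_out *)stream;
    struct audio_device *adev = out->dev;
    struct str_parms *parms = str_parms_create_str(kvpairs);
    char value[32];

    /* "routing=<device>" carries the new output device */
    if (str_parms_get_str(parms, AUDIO_PARAMETER_STREAM_ROUTING,
                          value, sizeof(value)) >= 0) {
        out->devices = (audio_devices_t)atoi(value);

        /* Entering a call: open the voice-call FE PCM via voice_start_call() */
        if (adev->mode == AUDIO_MODE_IN_CALL && !voice_is_in_call(adev))
            voice_start_call(adev);
        else
            select_devices(adev, out->usecase);
    }

    str_parms_destroy(parms);
    return 0;
}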


Back End DAIs: the audio back end. One back end corresponds to one DAI interface, and one FE PCM can be connected to one or more BE DAIs. Common back ends include:

SLIM_BUS
Aux_PCM
Primary_MI2S
Secondary_MI2S
Tertiary_MI2S
Quaternary_MI2S


Audio Devices: headset, speaker, earpiece, mic, BT, modem, etc. Different devices may be connected to different DAI interfaces or share the same DAI interface.

A device represents an audio endpoint, including output endpoints (such as speaker, headphone, earpiece) and input endpoints (such as headset-mic, builtin-mic). The Qualcomm HAL extends the audio device definitions; for example, speaker is split into:

SND_DEVICE_OUT_SPEAKER: normal loudspeaker playback
SND_DEVICE_OUT_SPEAKER_PROTECTED: loudspeaker playback with speaker protection
SND_DEVICE_OUT_VOICE_SPEAKER: normal hands-free voice call
SND_DEVICE_OUT_VOICE_SPEAKER_PROTECTED: hands-free voice call with speaker protection

See the audio device definitions in platform.h; only part of them is listed below:

/* Sound devices specific to the platform
 * The DEVICE_OUT_* and DEVICE_IN_* should be mapped to these sound
 * devices to enable corresponding mixer paths
 */
enum {
    SND_DEVICE_NONE = 0,

    /* Playback devices */
    SND_DEVICE_MIN,
    SND_DEVICE_OUT_BEGIN = SND_DEVICE_MIN,
    SND_DEVICE_OUT_HANDSET = SND_DEVICE_OUT_BEGIN,
    SND_DEVICE_OUT_SPEAKER,
    SND_DEVICE_OUT_HEADPHONES,
    SND_DEVICE_OUT_HEADPHONES_DSD,
    SND_DEVICE_OUT_SPEAKER_AND_HEADPHONES,
    SND_DEVICE_OUT_SPEAKER_AND_LINE,
    SND_DEVICE_OUT_VOICE_HANDSET,
    SND_DEVICE_OUT_VOICE_SPEAKER,
    SND_DEVICE_OUT_VOICE_HEADPHONES,
    SND_DEVICE_OUT_VOICE_LINE,
    SND_DEVICE_OUT_HDMI,
    SND_DEVICE_OUT_DISPLAY_PORT,
    SND_DEVICE_OUT_BT_SCO,
    SND_DEVICE_OUT_BT_A2DP,
    SND_DEVICE_OUT_SPEAKER_AND_BT_A2DP,
    SND_DEVICE_OUT_AFE_PROXY,
    SND_DEVICE_OUT_USB_HEADSET,
    SND_DEVICE_OUT_USB_HEADPHONES,
    SND_DEVICE_OUT_SPEAKER_AND_USB_HEADSET,
    SND_DEVICE_OUT_SPEAKER_PROTECTED,
    SND_DEVICE_OUT_VOICE_SPEAKER_PROTECTED,
    SND_DEVICE_OUT_END,

    /* Capture devices */
    SND_DEVICE_IN_BEGIN = SND_DEVICE_OUT_END,
    SND_DEVICE_IN_HANDSET_MIC = SND_DEVICE_IN_BEGIN, // 58
    SND_DEVICE_IN_SPEAKER_MIC,
    SND_DEVICE_IN_HEADSET_MIC,
    SND_DEVICE_IN_VOICE_SPEAKER_MIC,
    SND_DEVICE_IN_VOICE_HEADSET_MIC,
    SND_DEVICE_IN_BT_SCO_MIC,
    SND_DEVICE_IN_CAMCORDER_MIC,
    SND_DEVICE_IN_END,

    SND_DEVICE_MAX = SND_DEVICE_IN_END,
};


The reason for so many extensions is to make it easy to set acdb ids. For example, loudspeaker playback and hands-free calls use the same physical speaker,
but the two scenarios use different algorithms, so different acdb ids must be sent to the aDSP;
SND_DEVICE_OUT_SPEAKER and SND_DEVICE_OUT_VOICE_SPEAKER are distinguished so that each can match its own acdb id.
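
A hypothetical sketch of that lookup (the real table and platform_get_snd_device_acdb_id() live in platform.c; the acdb id values below are placeholders, not real calibration data):

#include <errno.h>

/* Placeholder values: real acdb ids come from the platform's calibration data. */
static int acdb_device_table[SND_DEVICE_MAX] = {
    [SND_DEVICE_OUT_SPEAKER]       = 14,
    [SND_DEVICE_OUT_VOICE_SPEAKER] = 15,
    /* ... one entry per snd_device ... */
};

int get_snd_device_acdb_id_sketch(snd_device_t snd_device)
{
    if (snd_device < SND_DEVICE_OUT_BEGIN || snd_device >= SND_DEVICE_MAX)
        return -EINVAL;
    return acdb_device_table[snd_device];
}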

Since the audio devices defined by the Qualcomm HAL do not match those defined by the Android framework, the Qualcomm HAL converts the audio device passed down from the framework according to the audio scenario. For details, see:

platform_get_output_snd_device()
platform_get_input_snd_device()
In the Qualcomm HAL we only deal with usecases (i.e. FE PCMs) and devices. Devices and BE DAIs have a many-to-one relationship: each device is connected to exactly one BE DAI (the reverse is not true; a BE DAI may be connected to multiple devices), so once the device is known, the BE DAI it connects to is also determined.
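
A much-simplified, hypothetical sketch of that conversion (the real platform_get_output_snd_device() handles far more devices and scenarios):

/* Simplified illustration only; not the actual HAL logic. */
static snd_device_t pick_output_snd_device(struct audio_device *adev,
                                           audio_devices_t devices)
{
    if (devices & AUDIO_DEVICE_OUT_SPEAKER) {
        /* Same physical speaker, but a call uses different tuning (acdb id),
         * hence a different snd_device. */
        if (adev->mode == AUDIO_MODE_IN_CALL)
            return SND_DEVICE_OUT_VOICE_SPEAKER;
        return SND_DEVICE_OUT_SPEAKER;
    }
    if (devices & AUDIO_DEVICE_OUT_WIRED_HEADPHONE)
        return SND_DEVICE_OUT_HEADPHONES;
    if (devices & AUDIO_DEVICE_OUT_ALL_SCO)
        return SND_DEVICE_OUT_BT_SCO;
    return SND_DEVICE_OUT_HANDSET;
}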

routing

In mixer_paths.xml we can see the usecase-related paths:

<path name="deep-buffer-playback speaker">
    <ctl name="QUAT_MI2S_RX Audio Mixer MultiMedia1" value="1" />
</path>

<path name="deep-buffer-playback headphones">
    <ctl name="TERT_MI2S_RX Audio Mixer MultiMedia1" value="1" />
</path>

<path name="deep-buffer-playback earphones">
    <ctl name="QUAT_MI2S_RX Audio Mixer MultiMedia1" value="1" />
</path>

<path name="low-latency-playback speaker">
    <ctl name="QUAT_MI2S_RX Audio Mixer MultiMedia5" value="1" />
</path>

<path name="low-latency-playback headphones">
    <ctl name="TERT_MI2S_RX Audio Mixer MultiMedia5" value="1" />
</path>

<path name="low-latency-playback earphones">
    <ctl name="QUAT_MI2S_RX Audio Mixer MultiMedia5" value="1" />
</path>

These paths are in fact routes connecting usecases and devices. For example, "deep-buffer-playback speaker" is the route between the deep-buffer-playback FE PCM and the speaker device: enabling it connects the deep-buffer-playback FE PCM to the speaker device, and closing it disconnects them.

As mentioned above, a device is connected to exactly one BE DAI, and knowing the device determines the connected BE DAI. These routing paths therefore imply the BE DAI connections: an FE PCM is not connected directly to a device; the FE PCM connects to a BE DAI, and the BE DAI connects to the device. This helps in understanding the routing controls, which operate on the connection between FE PCMs and BE DAIs. A playback routing control is generally named $BE_DAI Audio Mixer $FE_PCM, and a capture routing control is generally named $FE_PCM Audio Mixer $BE_DAI, which makes them easy to tell apart.

For example, the routing control in the "deep-buffer-playback speaker" path:

<ctl name="QUAT_MI2S_RX Audio Mixer MultiMedia1" value="1" />

MultiMedia1: the FE PCM corresponding to the deep_buffer usecase
QUAT_MI2S_RX: the BE DAI connected to the speaker device
Audio Mixer: indicates the DSP routing function
value: 1 means connect, 0 means disconnect

This control means: connect the MultiMedia1 PCM to the QUAT_MI2S_RX DAI. It does not express the connection between the QUAT_MI2S_RX DAI and the speaker device, because no routing control is needed between BE DAIs and devices; as emphasized above, a device is connected to exactly one BE DAI, so knowing the device also determines the BE DAI.

Switching a routing control not only connects or disconnects FE PCMs and BE DAIs, it also enables or disables the BE DAIs themselves. To understand this in depth you need to study the ALSA DPCM (Dynamic PCM) mechanism; a rough understanding is enough here.

The routing operations are implemented by enable_audio_route()/disable_audio_route(); as their names suggest, they control the connection and disconnection between FE PCMs and BE DAIs.
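
A hedged sketch of the idea behind those functions (the real ones live in audio_hw.c and build the path name with platform helpers such as platform_add_backend_name(); the helpers below are illustrative, using the audio_route library):

#include <stdio.h>
#include <audio_route/audio_route.h>

/* Illustrative only: applies a <path> from mixer_paths.xml such as
 * "deep-buffer-playback speaker", which connects the FE PCM to the BE DAI. */
static void enable_audio_route_sketch(struct audio_route *ar,
                                      const char *usecase_name, /* e.g. "deep-buffer-playback" */
                                      const char *device_name)  /* e.g. "speaker" */
{
    char mixer_path[128];

    snprintf(mixer_path, sizeof(mixer_path), "%s %s", usecase_name, device_name);
    audio_route_apply_and_update_path(ar, mixer_path);
}

/* The disable counterpart resets the same path. */
static void disable_audio_route_sketch(struct audio_route *ar,
                                       const char *usecase_name,
                                       const char *device_name)
{
    char mixer_path[128];

    snprintf(mixer_path, sizeof(mixer_path), "%s %s", usecase_name, device_name);
    audio_route_reset_and_update_path(ar, mixer_path);
}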

ALSA debug commands

tinymix

Views and sets mixer controls, i.e. switches or configures the audio route.

tinycap

Recording command; use tinymix to set up the audio route before running it.

tinycap /sdcard/test.pcm -D 0 -d 0 -c 4 -r 48000 -b 32 -p 768 -n 10 
-D  card          sound card
-d  device        PCM device
-c  channels      number of channels
-r  rate          sample rate
-b  bits          PCM bit width
-p  period_size   frames per period (interrupt)
-n  n_periods     number of periods

Example of use

// Configure the route with tinymix first
tinymix "MultiMedia2 Mixer QUAT_TDM_TX_0" 1 
tinycap /data/test.wav -c 8 -d 1 

tinyplay

Playback command; usage example:

// Configure the route with tinymix first
tinymix "QUAT_TDM_RX_0 Channels" "Two" 
tinymix "QUAT_TDM_RX_0 Audio Mixer MultiMedia1" "1" 
tinyplay /sdcard/Music/LoveYou.wav 

Source: blog.csdn.net/wangbuji/article/details/126374182