ALSA subsystem (18) ------ Analysis of a stuttering fingerprint-unlock animation prompt sound

Hello! This is Kite’s blog.
You are welcome to discuss things with me.

I haven’t written about kernel-related topics in a long time. Since I joined the phone manufacturer, most of my work has been on Android. The kernel is involved, but only peripherally; the main business logic still lives in the HAL. Kernel modifications are mostly limited to project bring-up, and after that there are few changes.

I ran into a kernel problem recently, so here is a brief write-up~

[Preconditions] Enroll a fingerprint on the phone; under Sound and vibration - Touch and prompt sounds, enable the fingerprint animation sound; select the swirl style for the fingerprint animation.
[Operation steps]
Play music in QQ Music, press the power button to wake the screen, then unlock with the fingerprint.
[Actual result] The fingerprint animation prompt sound stutters.
[Expected result] The prompt sound should play without stuttering.
[Comparison with previous versions] Low-probability issue.

Judging from my years of work experience, locating a problem like this from the outside in is the best approach and is relatively easy to handle.
ivdump

Check the AudioDspStreamManager.xx.TaskPlayback_ivdump.pcm file. This is the I/V feedback signal from the PA (smart amplifier), which reflects the actual state of the speaker. Judging from the spectrum, the sound indeed stutters when the unlock animation prompt plays.


Also check AudioDspStreamManager.xx.TaskPlayback_datain.pcm: the data is indeed already broken at the kernel level. However, the HAL-side dump streamout.pcm.xx.AudioALSAPlaybackHandlerDsp.flag8.xx.48000.8_24bit.2ch_20230411_092109.wav is fine, which indicates that the problem is introduced at the kernel level and below, not in the HAL.

Because the prompt tone is short, it goes through the fast path, and the fast-path buffer is small, so it is indeed prone to this kind of problem.
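To get a rough sense of the margin involved (assuming, as the HAL dump file name above and the ADSP log below suggest, 48 kHz stereo samples in 32-bit containers written in 2048-byte chunks; the exact fast-path period may differ):

	2048 bytes per write / (4 bytes per sample x 2 channels) = 256 frames
	256 frames / 48000 frames per second ≈ 5.3 ms of audio per write

So a delay of only a few milliseconds on the writing side is enough for playback to run dry.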

Check the kernel log:

	<6>[83150.878119][T600541] mtk_dsp_check_exception() deep adsp underflow
	<6>[83150.926100][T516896] snd_audio_dsp snd_audio_dsp: mtk_dsp_start() deep just underflow
	<6>[83178.409440][T700541] mtk_dsp_check_exception() fast adsp underflow
	<6>[83178.419408][T712585] snd_audio_dsp snd_audio_dsp: mtk_dsp_start() fast just underflow

An underflow exception was indeed found. This means the data was written too slowly and there was nothing left to play, hence the underflow!
Checking the ADSP log for the same period:

	 [83137.343]<A-22>[D]audio_dsp_hw_write_op(), HW_STATE_UNDERFLOW, return
	 [83157.758]<A-11>[D] enter_write_cond, underflow, written_size[2048] datacount[0] task_name[fast_playback]
	 [83157.758]<A-11>[W] write_data_loop() ADSP_DL_CONSUME_UNDERFLOW
	 [83166.654]<A-11>[D] enter_write_cond, underflow, written_size[2048] datacount[0] task_name[fast_playback]
	 [83166.654]<A-11>[W] write_data_loop() ADSP_DL_CONSUME_UNDERFLOW

The underflow can indeed be seen here as well.

Since an underflow means the data was not written in time and there was nothing left to play, simply enlarging the buffer will not help.
So we have to find another way!

Here is a brief description of MTK playback:
(figure: MTK playback path with DSP)

I’m a bit of a “soul painter” (crude artist), hehe; the picture is drawn according to my own understanding~

After the HAL writes data to the kernel, the kernel sends the data to the DSP through IPI inter-core communication. The audio algorithms we use run on the DSP, and the processed data is sent to the codec over I2S for playback. The kernel is mainly responsible for management and control of the codec.

Source code analysis:
drivers/misc/mediatek/audio_ipi/common/adsp_ipi.c

static int __init audio_ipi_init(void)
{
        /* ... declarations of ret, task_id, dsp_id, task_info, etc. omitted ... */

        ipi_queue_init();

        audio_task_manager_init();
        audio_messenger_ipi_init();

        init_audio_ipi_dma();
#if IS_ENABLED(CONFIG_MTK_AUDIODSP_SUPPORT)
        adsp_register_notify(&audio_ctrl_notifier);
#endif
        /* record, for every task scene, which DSP core handles it */
        for (task_id = 0; task_id < TASK_SCENE_SIZE; task_id++) {
                task_info = &g_audio_task_info[task_id];

                dsp_id = audio_get_dsp_id(task_id);

                task_info->dsp_id = dsp_id;
                task_info->is_dsp_support = is_audio_dsp_support(dsp_id);
                task_info->is_adsp = is_audio_use_adsp(dsp_id);
                task_info->is_scp = is_audio_use_scp(dsp_id);
                task_info->task_ctrl = get_audio_controller_task(dsp_id);
        }

        ret = misc_register(&audio_ipi_device);

        return ret;
}

drivers/misc/mediatek/audio_ipi/common/adsp_ipi_queue.c

void ipi_queue_init(void)
{
        /* ... declaration of dsp_id omitted ... */

        for (dsp_id = 0; dsp_id < NUM_OPENDSP_TYPE; dsp_id++) {
                if (is_audio_dsp_support(dsp_id))
                        ipi_queue_init_by_dsp(dsp_id);
        }

#if IS_ENABLED(CONFIG_MTK_AUDIODSP_SUPPORT)
        hook_ipi_queue_send_msg_handler(dsp_send_msg_to_queue_wrap);
        hook_ipi_queue_recv_msg_hanlder(dsp_dispatch_ipi_hanlder_to_queue_wrap);
#endif
}

hook_ipi_queue_send_msg_handler and hook_ipi_queue_recv_msg_hanlder simply assign the two function pointers ipi_queue_send_msg_handler and ipi_queue_recv_msg_handler, which are used when sending messages to the DSP and receiving messages from it. There is not much to say about them. The key function here is ipi_queue_init_by_dsp, which initializes the message queues for each DSP:
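As a rough sketch of what this hook mechanism amounts to (my assumption, not the real MTK code; the callback parameter list and the wrapper function below are hypothetical), the generic queue code just stores a platform-specific callback and calls it when a message has to go out:

#include <linux/errno.h>
#include <linux/types.h>

/* hypothetical callback signature */
typedef int (*ipi_queue_send_msg_t)(uint32_t dsp_id, void *msg);

static ipi_queue_send_msg_t ipi_queue_send_msg_handler;

/* called from ipi_queue_init() with dsp_send_msg_to_queue_wrap */
void hook_ipi_queue_send_msg_handler(ipi_queue_send_msg_t handler)
{
        ipi_queue_send_msg_handler = handler;
}

/* hypothetical call site: used whenever a message must be sent to the DSP */
static int ipi_queue_send_msg(uint32_t dsp_id, void *msg)
{
        if (!ipi_queue_send_msg_handler)
                return -ENODEV; /* no DSP backend hooked yet */
        return ipi_queue_send_msg_handler(dsp_id, msg);
}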

int ipi_queue_init_by_dsp(uint32_t dsp_id)
{
        /* ... declarations of ret, dsp_path, msg_queue omitted ... */

        /* one message queue per direction: AP -> DSP and DSP -> AP */
        for (dsp_path = 0; dsp_path < DSP_NUM_PATH; dsp_path++) {
                msg_queue = &g_dsp_msg_queue[dsp_id][dsp_path];
                ret = dsp_init_single_msg_queue(msg_queue, dsp_id, dsp_path);
                if (ret != 0)
                        WARN_ON(1);
        }

        return ret;
}

This loops twice: path 0 is AP to DSP, path 1 is DSP to AP.
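The two path values are presumably laid out like this (my reconstruction from the names used in the code below, not copied from the MTK header):

enum {
        DSP_PATH_A2D = 0,       /* AP -> DSP: messages/data going toward the DSP */
        DSP_PATH_D2A = 1,       /* DSP -> AP: messages coming back from the DSP */
        DSP_NUM_PATH
};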

static int dsp_init_single_msg_queue(
        struct dsp_msg_queue_t *msg_queue,
        const uint32_t dsp_id,
        const uint32_t dsp_path)
{
        /* ... other queue initialization omitted ... */

        /* pick the processing function according to the message direction */
        if (dsp_path == DSP_PATH_A2D) {
                msg_queue->dsp_process_msg_func = dsp_send_msg_to_dsp;
        } else if (dsp_path == DSP_PATH_D2A) {
                msg_queue->dsp_process_msg_func = dsp_process_msg_from_dsp;
        } else
                WARN_ON(1);

        /* launch thread */
        msg_queue->dsp_thread_task = kthread_create(
                                             dsp_process_msg_thread,
                                             msg_queue,
                                             "%s",
                                             thread_name);
        if (IS_ERR(msg_queue->dsp_thread_task)) {
                pr_info("can not create %s kthread", thread_name);
                WARN_ON(1);
                msg_queue->thread_enable = false;
        } else {
                msg_queue->thread_enable = true;
                dsb(SY);
                wake_up_process(msg_queue->dsp_thread_task);
        }
}

Focus on this part: depending on the direction, AP to DSP or DSP to AP, a different processing function is assigned to dsp_process_msg_func. Then the kthread dsp_process_msg_thread is created and started with wake_up_process().

static int dsp_process_msg_thread(void *data)
{
        /* ... declarations (msg_queue from data, p_dsp_msg, p_element, idx_msg, retval, flags) omitted ... */

        while (msg_queue->thread_enable && !kthread_should_stop()) {
                /* wait until an element is pushed into the queue */
                retval = dsp_get_queue_element(msg_queue, &p_dsp_msg, &idx_msg);

                p_element = &msg_queue->element[idx_msg];

                /* send to dsp (A2D) or handle the message from dsp (D2A) */
                retval = msg_queue->dsp_process_msg_func(msg_queue, p_dsp_msg);

                /* notify the waiting sender if needed */
                spin_lock_irqsave(&p_element->element_lock, flags);
                if (p_element->wait_in_thread == true) {
                        p_element->send_retval = retval;
                        p_element->signal_arrival = true;
                        dsb(SY);
                        wake_up_interruptible(&p_element->element_wq);
                }
                spin_unlock_irqrestore(&p_element->element_lock, flags);

                /* pop the message from the queue */
                spin_lock_irqsave(&msg_queue->queue_lock, flags);
                dsp_pop_msg(msg_queue);
                spin_unlock_irqrestore(&msg_queue->queue_lock, flags);
        }
        return 0;
}

Processing in the thread is relatively simple: wait for data to arrive in the queue, and when it does, send it to the DSP through the dsp_process_msg_func hook function that was filled in earlier.
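For completeness, the sending side presumably does the mirror of this handshake: push an element into the queue, let the thread process it, and, for a synchronous send, block on element_wq until signal_arrival is set by the code above. Below is a rough sketch based only on the fields visible in dsp_process_msg_thread(); the function itself is hypothetical, and it assumes the caller marks the element before the thread reaches it:

/* hypothetical: wait for dsp_process_msg_thread() to finish sending the
 * element at idx_msg, mirroring the signal_arrival / element_wq handshake */
static int dsp_wait_for_element_sketch(struct dsp_msg_queue_t *msg_queue,
                                       uint32_t idx_msg)
{
        unsigned long flags = 0;
        int ret = 0;

        spin_lock_irqsave(&msg_queue->element[idx_msg].element_lock, flags);
        msg_queue->element[idx_msg].wait_in_thread = true;
        spin_unlock_irqrestore(&msg_queue->element[idx_msg].element_lock, flags);

        /* the processing thread fills send_retval, sets signal_arrival and
         * wakes element_wq (see the locked block in the thread above) */
        wait_event_interruptible(msg_queue->element[idx_msg].element_wq,
                                 msg_queue->element[idx_msg].signal_arrival);

        ret = msg_queue->element[idx_msg].send_retval;
        return ret;
}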

Okay, after all this groundwork, back to the topic. An underflow means the data was not written in time, and the data is written to the DSP through the dsp_process_msg_thread thread. So all we need to do is make sure this thread is scheduled and runs normally.

How do we guarantee that? Naturally, by raising the thread priority!

To give users a good experience, we simply make the thread a real-time (RT) thread when it is created in dsp_init_single_msg_queue.
After kthread_create creates the thread, apply this API:

struct sched_param param = { .sched_priority = 3 };

sched_setscheduler_nocheck(msg_queue->dsp_thread_task, SCHED_FIFO, &param);

This moves the thread into the RT (SCHED_FIFO) class.
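In context the change sits in dsp_init_single_msg_queue(), right after kthread_create() succeeds and before wake_up_process(); roughly like this (a sketch of the patched flow, not the verbatim patch):

        if (IS_ERR(msg_queue->dsp_thread_task)) {
                pr_info("can not create %s kthread", thread_name);
                WARN_ON(1);
                msg_queue->thread_enable = false;
        } else {
                /* new: make the IPI writer thread SCHED_FIFO so it is not
                 * starved by ordinary CFS tasks when the fast path is active */
                struct sched_param param = { .sched_priority = 3 };

                sched_setscheduler_nocheck(msg_queue->dsp_thread_task,
                                           SCHED_FIFO, &param);

                msg_queue->thread_enable = true;
                dsb(SY);
                wake_up_process(msg_queue->dsp_thread_task);
        }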

After making it an RT thread, stress tests of the problem scenario showed no more underflow, and the stutter did not recur. Problem solved~

There are some other things in the initial audio_ipi_init function that I will not elaborate on; there is relatively little public information about the DSP, it is still hard to understand, and it is not the focus of this article.
audio_ipi_init also configures the DMA and the tasks; each audio scene occupies one task:

        /* scene for library */
        TASK_SCENE_PHONE_CALL           = 0,
        TASK_SCENE_VOICE_ULTRASOUND     = 1,
        TASK_SCENE_PLAYBACK_MP3         = 2,
        TASK_SCENE_RECORD               = 3,
        TASK_SCENE_VOIP                 = 4,
        TASK_SCENE_SPEAKER_PROTECTION   = 5,
        TASK_SCENE_VOW                  = 6,
        TASK_SCENE_PRIMARY              = 7,
        TASK_SCENE_DEEPBUFFER           = 8,
        TASK_SCENE_AUDPLAYBACK          = 9,
        TASK_SCENE_CAPTURE_UL1          = 10,
        TASK_SCENE_A2DP                 = 11,
        //......

OK, I won’t go into the rest here; more on that another time~

Origin blog.csdn.net/Guet_Kite/article/details/130629809