Android Framework Audio Subsystem (11): Audio channel switching on headset plug and unplug

Master link for this series: Android Framework Audio Subsystem topic directory


Summary and key points of this chapter:

This chapter focuses on the audio channel switching part of headset plugging and unplugging (the upper-left part of the mind map above). It explains the principle of channel switching and analyzes the switching flow in the code.


1 The principle of audio channel switching on headset plug and unplug

1.1 Scenario analysis of audio channel switching

Two cases are analyzed here: plugging in a USB sound card, and plugging a headset into the primary device.

Plugging in a USB sound card:

  1. The matching usb module can be found in the configuration file audio_policy.conf; it must contain outputs for usb_accessory and usb_device (a hedged excerpt is sketched after this list). After the USB sound card is plugged in, an output and a corresponding PlaybackThread are created (one for each of the usb_accessory and usb_device outputs).
  2. Playback must move from the PlaybackThread that was tied to the onboard sound card to the newly created one: each app's AudioTrack switches from the original playbackthread/output to the new playbackthread/output, and in the new PlaybackThread each app gets a corresponding Track.
  3. Select the device in the output and apply the settings again, which decides whether the sound comes out of the headphones or the speakers.
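For reference, the usb entry in audio_policy.conf looks roughly like the sketch below. This is an illustrative excerpt only, not copied from any particular device; the exact rates, formats and fields vary per platform:

audio_hw_modules {
  ...
  usb {
    outputs {
      usb_accessory {
        sampling_rates 44100
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_USB_ACCESSORY
      }
      usb_device {
        sampling_rates dynamic
        channel_masks dynamic
        formats dynamic
        devices AUDIO_DEVICE_OUT_USB_DEVICE
      }
    }
  }
}

Each of these outputs is parsed into a profile (IOProfile); when the USB device becomes available, an output and its PlaybackThread can be opened for the matching profile.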

Plugging headphones into the primary device:

  1. There is no need to create an output or a PlaybackThread, because the output (and the thread behind it) that the sound goes through does not change.
  2. There is no need to switch the output.
  3. Just select the device (headset) in the original output.

The two processes are not that different: first decide whether a new output must be created, that is, whether playback needs a new thread to handle it, and finally select the device in the output.

1.2 The three core steps of audio channel switching

When a headset is plugged in, the hardware raises an interrupt, and the interrupt handler configures the sound card so that sound can come out of the headset. The driver then reports a plug/unplug event for the device to the Android system (on many devices this arrives as a switch/extcon uevent that WiredAccessoryManager observes and forwards to AudioService), and it is the Android system that performs the actual audio channel switching.
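For illustration, on devices that use the classic h2w switch driver the plug event arrives as a uevent roughly of the following form (an assumed example; the exact path, names and values depend on the kernel):

change@/devices/virtual/switch/h2w
ACTION=change
SUBSYSTEM=switch
SWITCH_NAME=h2w
SWITCH_STATE=1

WiredAccessoryManager turns such events into the device/state values that eventually reach onSetWiredDeviceConnectionState in AudioService (analyzed in section 2).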

The three core steps are as follows:

@1 checkOutputsForDevice

For this device, open a new output and create a new playback thread where needed. From audio_policy.conf we can work out "which outputs should exist" to support the device, while mOutputs records "which outputs are already open"; comparing the two tells us "which outputs still need to be opened".

@2 checkOutputForAllStrategies / checkOutputForStrategy

For every strategy (sounds are grouped by strategy), determine whether it needs to be migrated to a new output; if so, migrate the corresponding Tracks to the new output. Two steps are involved here:

@@2.1 Determine whether migration is required:

  1. For the strategy, get its oldDevice, and from that get its outputs (srcOutputs);
  2. For the strategy, get its newDevice, and from that get its outputs (dstOutputs);
  3. If srcOutputs and dstOutputs differ, migration is required.

@@2.2 If migration is required:

Set the corresponding Tracks to the invalidate state. The next time the app writes to its AudioTrack and finds it in the invalidate state, it recreates a new Track.

The key lines behind the judgment in @@2.1 (taken from checkOutputForStrategy, analyzed in section 2.2):

audio_devices_t oldDevice = getDeviceForStrategy(strategy, true /*fromCache*/);
audio_devices_t newDevice = getDeviceForStrategy(strategy, false /*fromCache*/);
SortedVector<audio_io_handle_t> srcOutputs = getOutputsForDevice(oldDevice, mPreviousOutputs);
SortedVector<audio_io_handle_t> dstOutputs = getOutputsForDevice(newDevice, mOutputs);

@3 getNewOutputDevice / setOutputDevice: this step operates on the HAL layer. For each open output, getNewOutputDevice recomputes which device it should route to, and setOutputDevice programs that route down to the HAL.
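A minimal sketch of what this last step does, assuming the older "routing"-parameter path (newer AOSP releases build the route with audio patches via createAudioPatch); the names follow AudioPolicyManager, but this is not the verbatim code:

audio_devices_t newDevice = getNewOutputDevice(output, true /*fromCache*/);

// Tell AudioFlinger which device this output should now drive. AudioFlinger
// forwards the parameter to the HAL output stream's set_parameters(), which
// is where the codec path (speaker vs. wired headset) is finally switched.
AudioParameter param;
param.addInt(String8(AudioParameter::keyRouting), (int)newDevice);
mpClientInterface->setParameters(output, param.toString(), delayMs);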


2 Source code analysis of audio channel switching on headset plug and unplug

The analysis starts from onSetWiredDeviceConnectionState in AudioService, implemented as follows:

private void onSetWiredDeviceConnectionState(int device, int state, String name)
{
    synchronized (mConnectedDevices) {
        //...
        //Key point 1: entry point of audio channel switching
        handleDeviceConnection((state == 1), device, (isUsb ? name : ""));
        if (state != 0) {
            //...
            if ((device & mSafeMediaVolumeDevices) != 0) {
                sendMsg(mAudioHandler,MSG_CHECK_MUSIC_ACTIVE,SENDMSG_REPLACE,0,0,null,MUSIC_ACTIVE_POLL_PERIOD_MS);
            }
            //...
        } else {
            //...
        }
        if (!isUsb && (device != AudioSystem.DEVICE_IN_WIRED_HEADSET)) {
            //Key point 2: report the intent through AMS
            sendDeviceConnectionIntent(device, state, name);
        }
    }
}

We focus on two key points here: handleDeviceConnection, the entry point of channel switching, and sendDeviceConnectionIntent, which reports the intent through AMS. This chapter starts from handleDeviceConnection. In the Java layer it calls AudioSystem.setDeviceConnectionState, which crosses JNI into AudioPolicyService and finally reaches setDeviceConnectionStateInt of AudioPolicyManager in the native layer.
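A sketch of that bridge, assuming an Android 5.x/6.x code base (simplified; permission and argument checks elided):

// Java: AudioService.handleDeviceConnection() calls
//   AudioSystem.setDeviceConnectionState(device, state, address)
// which goes through JNI (android_media_AudioSystem.cpp) into the native
// AudioPolicyService, and from there into AudioPolicyManager:
status_t AudioPolicyService::setDeviceConnectionState(audio_devices_t device,
                                                      audio_policy_dev_state_t state,
                                                      const char *device_address)
{
    if (mAudioPolicyManager == NULL) {
        return NO_INIT;
    }
    //...
    Mutex::Autolock _l(mLock);
    return mAudioPolicyManager->setDeviceConnectionState(device, state, device_address);
}
// AudioPolicyManager::setDeviceConnectionState() then simply forwards to
// setDeviceConnectionStateInt(), analyzed below.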

status_t AudioPolicyManager::setDeviceConnectionStateInt(audio_devices_t device,
                                                         audio_policy_dev_state_t state,
                                                         const char *device_address)
{
    if (!audio_is_output_device(device) && !audio_is_input_device(device)) return BAD_VALUE;
    sp<DeviceDescriptor> devDesc = getDeviceDescriptor(device, device_address);

    // handle output devices
    /*Check whether the reported device is an output device*/
    if (audio_is_output_device(device)) {
        SortedVector <audio_io_handle_t> outputs;
        ssize_t index = mAvailableOutputDevices.indexOf(devDesc);
        mPreviousOutputs = mOutputs;
        switch (state)
        {
        // handle output device connection
        case AUDIO_POLICY_DEVICE_STATE_AVAILABLE: {
            //If it already exists, return directly; otherwise it is a newly added device
            if (index >= 0) {
                return INVALID_OPERATION;
            }
            //Add it to the available output devices
            index = mAvailableOutputDevices.add(devDesc);
            if (index >= 0) {
                //Find the hardware module that supports this device
                sp<HwModule> module = getModuleForDevice(device);
                if (module == 0) {
                    mAvailableOutputDevices.remove(devDesc);
                    return INVALID_OPERATION;
                }
                mAvailableOutputDevices[index]->mId = nextUniqueId();
                mAvailableOutputDevices[index]->mModule = module;
            } else {
                return NO_MEMORY;
            }
            //Key point 1: for this device, open a new output and create a new playback thread
            if (checkOutputsForDevice(devDesc, state, outputs, devDesc->mAddress) != NO_ERROR) {
                mAvailableOutputDevices.remove(devDesc);
                return INVALID_OPERATION;
            }

            // Set connect to HALs
            AudioParameter param = AudioParameter(devDesc->mAddress);
            param.addInt(String8(AUDIO_PARAMETER_DEVICE_CONNECT), device);
            mpClientInterface->setParameters(AUDIO_IO_HANDLE_NONE, param.toString());

            } break;
        // handle output device disconnection
        case AUDIO_POLICY_DEVICE_STATE_UNAVAILABLE: {
            // Set Disconnect to HALs
            AudioParameter param = AudioParameter(devDesc->mAddress);
            param.addInt(String8(AUDIO_PARAMETER_DEVICE_DISCONNECT), device);
            mpClientInterface->setParameters(AUDIO_IO_HANDLE_NONE, param.toString());

            // remove device from available output devices
            mAvailableOutputDevices.remove(devDesc);

            checkOutputsForDevice(devDesc, state, outputs, devDesc->mAddress);
            } break;

        default:
            ALOGE("setDeviceConnectionState() invalid state: %x", state);
            return BAD_VALUE;
        }
        //...
        /*Key point 2: for each strategy (sound group), check whether it needs
         *to migrate to a new output; if so, migrate the corresponding Tracks
         */
        checkOutputForAllStrategies();
        //...
        for (size_t i = 0; i < mOutputs.size(); i++) {
            audio_io_handle_t output = mOutputs.keyAt(i);
            if ((mPhoneState != AUDIO_MODE_IN_CALL) || (output != mPrimaryOutput)) {
                audio_devices_t newDevice = getNewOutputDevice(mOutputs.keyAt(i),true /*fromCache*/);
                bool force = !mOutputs.valueAt(i)->isDuplicated()
                        && (!deviceDistinguishesOnAddress(device)
                                // always force when disconnecting (a non-duplicated device)
                                || (state == AUDIO_POLICY_DEVICE_STATE_UNAVAILABLE));
                setOutputDevice(output, newDevice, force, 0);
            }
        }

        mpClientInterface->onAudioPortListUpdate();
        return NO_ERROR;
    }  // end if is output device
    //... handling of audio input devices
    return BAD_VALUE;
}

Next, we mainly analyze the checkOutputsForDevice method and the checkOutputForAllStrategies method (which is where the invalidate flag gets set), and then the data-writing part (AudioTrack's write path).

2.1 checkOutputsForDevice analysis

The code implementation of checkOutputsForDevice is as follows:

status_t AudioPolicyManager::checkOutputsForDevice(const sp<DeviceDescriptor> devDesc,
                                                       audio_policy_dev_state_t state,
                                                       SortedVector<audio_io_handle_t>& outputs,
                                                       const String8 address)
{
    audio_devices_t device = devDesc->mDeviceType;
    //...
    if (state == AUDIO_POLICY_DEVICE_STATE_AVAILABLE) {
        //...
        for (ssize_t profile_index = 0; profile_index < (ssize_t)profiles.size(); profile_index++) {
            sp<IOProfile> profile = profiles[profile_index];
            // nothing to do if one output is already opened for this profile
            //...
            if (j != outputs.size()) {
                continue;
            }
            //...
            status_t status = mpClientInterface->openOutput(profile->mModule->mHandle,
                                                            &output,
                                                            &config,
                                                            &desc->mDevice,
                                                            address,
                                                            &desc->mLatency,
                                                            desc->mFlags);
            if (status == NO_ERROR) {
                //...
                if (output != AUDIO_IO_HANDLE_NONE) {
                    addOutput(output, desc);
                    if (deviceDistinguishesOnAddress(device) && address != "0") {
                        //...
                    } else if ((desc->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) == 0) {
                        //...
                        // open a duplicating output thread for the new output and the primary output
                        duplicatedOutput = mpClientInterface->openDuplicateOutput(output,mPrimaryOutput);
                        //...
                    }
                }
            } else {
                output = AUDIO_IO_HANDLE_NONE;
            }
            //...
        }
        //...
    } else { // Disconnect
        //...
    }
    return NO_ERROR;
}

Each output in the configuration file audio_policy.conf is described by a profile (IOProfile). In other words, checkOutputsForDevice walks all profiles that support the new device, checks whether an output (and its playback thread) is already open for each of them, and opens one if it is not.

2.2 checkOutputForAllStrategies analysis

The code implementation of checkOutputForAllStrategies is as follows:

void AudioPolicyManager::checkOutputForAllStrategies()
{
    if (mForceUse[AUDIO_POLICY_FORCE_FOR_SYSTEM] == AUDIO_POLICY_FORCE_SYSTEM_ENFORCED)
        checkOutputForStrategy(STRATEGY_ENFORCED_AUDIBLE);
    checkOutputForStrategy(STRATEGY_PHONE);
    if (mForceUse[AUDIO_POLICY_FORCE_FOR_SYSTEM] != AUDIO_POLICY_FORCE_SYSTEM_ENFORCED)
        checkOutputForStrategy(STRATEGY_ENFORCED_AUDIBLE);
    checkOutputForStrategy(STRATEGY_SONIFICATION);
    checkOutputForStrategy(STRATEGY_SONIFICATION_RESPECTFUL);
    checkOutputForStrategy(STRATEGY_ACCESSIBILITY);
    checkOutputForStrategy(STRATEGY_MEDIA);
    checkOutputForStrategy(STRATEGY_DTMF);
    checkOutputForStrategy(STRATEGY_REROUTING);
}

Here is a detailed analysis of checkOutputForStrategy. The code implementation is as follows:

void AudioPolicyManager::checkOutputForStrategy(routing_strategy strategy)
{
    /*
     *For this strategy, get its oldDevice, and from that its outputs (srcOutputs);
     *for this strategy, get its newDevice, and from that its outputs (dstOutputs)
     */
    audio_devices_t oldDevice = getDeviceForStrategy(strategy, true /*fromCache*/);
    audio_devices_t newDevice = getDeviceForStrategy(strategy, false /*fromCache*/);
    SortedVector<audio_io_handle_t> srcOutputs = getOutputsForDevice(oldDevice, mPreviousOutputs);
    SortedVector<audio_io_handle_t> dstOutputs = getOutputsForDevice(newDevice, mOutputs);

    // also take into account external policy-related changes: add all outputs which are
    // associated with policies in the "before" and "after" output vectors
    for (size_t i = 0 ; i < mPreviousOutputs.size() ; i++) {
        const sp<AudioOutputDescriptor> desc = mPreviousOutputs.valueAt(i);
        if (desc != 0 && desc->mPolicyMix != NULL) {
            srcOutputs.add(desc->mIoHandle);
        }
    }
    for (size_t i = 0 ; i < mOutputs.size() ; i++) {
        const sp<AudioOutputDescriptor> desc = mOutputs.valueAt(i);
        if (desc != 0 && desc->mPolicyMix != NULL) {
            dstOutputs.add(desc->mIoHandle);
        }
    }
    //If srcOutputs and dstOutputs differ, the strategy needs to migrate
    if (!vectorsEqual(srcOutputs,dstOutputs)) {
        // mute strategy while moving tracks from one output to another
        for (size_t i = 0; i < srcOutputs.size(); i++) {
            sp<AudioOutputDescriptor> desc = mOutputs.valueFor(srcOutputs[i]);
            if (desc->isStrategyActive(strategy)) {
                setStrategyMute(strategy, true, srcOutputs[i]);
                setStrategyMute(strategy, false, srcOutputs[i], MUTE_TIME_MS, newDevice);
            }
        }

        // Move effects associated to this strategy from previous output to new output
        if (strategy == STRATEGY_MEDIA) {
            audio_io_handle_t fxOutput = selectOutputForEffects(dstOutputs);
            SortedVector<audio_io_handle_t> moved;
            for (size_t i = 0; i < mEffects.size(); i++) {
                sp<EffectDescriptor> effectDesc = mEffects.valueAt(i);
                if (effectDesc->mSession == AUDIO_SESSION_OUTPUT_MIX &&
                        effectDesc->mIo != fxOutput) {
                    if (moved.indexOf(effectDesc->mIo) < 0) {
                        mpClientInterface->moveEffects(AUDIO_SESSION_OUTPUT_MIX, effectDesc->mIo,
                                                       fxOutput);
                        moved.add(effectDesc->mIo);
                    }
                    effectDesc->mIo = fxOutput;
                }
            }
        }
        // Move tracks associated to this strategy from previous output to new output
        for (int i = 0; i < AUDIO_STREAM_CNT; i++) {
            if (i == AUDIO_STREAM_PATCH) {
                continue;
            }
            if (getStrategy((audio_stream_type_t)i) == strategy) {
                /*
                 * Just set the corresponding Tracks to the invalidate state;
                 * when the app writes to its AudioTrack and finds it invalid,
                 * it will recreate a new Track
                 */
                mpClientInterface->invalidateStream((audio_stream_type_t)i);
            }
        }
    }
}

The invalidateStream operation ultimately reaches AudioFlinger::invalidateStream, implemented as follows:

status_t AudioFlinger::invalidateStream(audio_stream_type_t stream)
{
    Mutex::Autolock _l(mLock);
    for (size_t i = 0; i < mPlaybackThreads.size(); i++) {
        PlaybackThread *thread = mPlaybackThreads.valueAt(i).get();
        thread->invalidateTracks(stream);
    }
    return NO_ERROR;
}

The thread's invalidateTracks code is implemented as follows:

void AudioFlinger::PlaybackThread::invalidateTracks(audio_stream_type_t streamType)
{
    Mutex::Autolock _l(mLock);
    size_t size = mTracks.size();
    for (size_t i = 0; i < size; i++) {
        sp<Track> t = mTracks[i];
        if (t->streamType() == streamType) {
            t->invalidate();
        }
    }
}

Track's invalidate() is implemented as follows:

void AudioFlinger::PlaybackThread::Track::invalidate()
{
    // FIXME should use proxy, and needs work
    audio_track_cblk_t* cblk = mCblk;
    //Set the invalid flag
    android_atomic_or(CBLK_INVALID, &cblk->mFlags);
    android_atomic_release_store(0x40000000, &cblk->mFutex);
    // client is not in server, so FUTEX_WAKE is needed instead of FUTEX_WAKE_PRIVATE
    (void) syscall(__NR_futex, &cblk->mFutex, FUTEX_WAKE, INT_MAX);
    mIsInvalid = true;
}

The operation android_atomic_or(CBLK_INVALID, &cblk->mFlags) simply sets a flag bit in the control block shared with the client. The next time the app writes data through its AudioTrack, the flag change is detected and the corresponding recovery is performed. Next we take the app writing data through AudioTrack as the entry point for analysis.

2.3 Data writing after switching

According to the earlier analysis, AudioTrack.write() in the Java layer eventually reaches the native AudioTrack::write(), which obtains shared-memory buffers through AudioTrack::obtainBuffer(). The analysis therefore starts from obtainBuffer().
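For orientation, here is a simplified sketch of the native write loop, assuming an Android 5.x-era AudioTrack.cpp (error handling and partial-write bookkeeping omitted):

ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
    //... parameter checks elided
    size_t written = 0;
    Buffer audioBuffer;
    while (userSize >= mFrameSize) {
        audioBuffer.frameCount = userSize / mFrameSize;
        // obtainBuffer() is where an invalidated Track is detected and restored
        status_t err = obtainBuffer(&audioBuffer,
                blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
        if (err < 0) {
            break;
        }
        size_t toWrite = audioBuffer.size;
        memcpy(audioBuffer.i8, buffer, toWrite);   // copy app data into shared memory
        buffer = ((const char *) buffer) + toWrite;
        userSize -= toWrite;
        written += toWrite;
        releaseBuffer(&audioBuffer);               // hand the filled buffer back to the server side
    }
    return written;
}

The obtainBuffer() code is as follows: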

status_t AudioTrack::obtainBuffer(Buffer* audioBuffer, const struct timespec *requested,
        struct timespec *elapsed, size_t *nonContig)
{
        //...
        do {
            //...
            newSequence = mSequence;
            // did previous obtainBuffer() fail due to media server death or voluntary invalidation?
            if (status == DEAD_OBJECT) {
                // re-create track, unless someone else has already done so
                if (newSequence == oldSequence) {
                    status = restoreTrack_l("obtainBuffer");
                    //...
                }
            }
        //...
        status = proxy->obtainBuffer(&buffer, requested, elapsed);

    } while ((status == DEAD_OBJECT) && (tryCounter-- > 0));
    //...
    return status;
}

@1 restoreTrack_l() analysis

The restoreTrack_l() function is implemented as follows:

status_t AudioTrack::restoreTrack_l(const char *from)
{
    //...
    result = createTrack_l();
    //...
    return result;
}

As analyzed in the previous chapter, createTrack_l recreates the Track here; because the stream now routes to the new output, the new Track is created on the new output's playback thread.

@2 The proxy's obtainBuffer analysis

Here we focus on the proxy's obtainBuffer implementation; the code is as follows:

status_t ClientProxy::obtainBuffer(Buffer* buffer, const struct timespec *requested,
        struct timespec *elapsed)
{
    //...
    for (;;) {
        int32_t flags = android_atomic_and(~CBLK_INTERRUPT, &cblk->mFlags);
        // check for track invalidation by server, or server death detection
        if (flags & CBLK_INVALID) {
            ALOGV("Track invalidated");
            status = DEAD_OBJECT;//the flags contain CBLK_INVALID, so report DEAD_OBJECT
            goto end;
        }
        //...
    }
end:
    //...
    return status;
}

Here the returned status is set to DEAD_OBJECT because the CBLK_INVALID flag is set.

A brief summary: in the process of writing data, the old Track is found to be invalid and a new Track is created on the new output's playback thread, completing the channel switch.
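Putting the whole chapter together, the flow is (a recap using the names discussed above, not verbatim code):

// kernel driver reports the plug/unplug event
//   -> AudioService.onSetWiredDeviceConnectionState()          (Java layer)
//   -> AudioPolicyManager::setDeviceConnectionStateInt()       (native layer)
//        1. checkOutputsForDevice()        open a new output / PlaybackThread if needed
//        2. checkOutputForAllStrategies()  migrate strategies, invalidate their Tracks
//        3. getNewOutputDevice() / setOutputDevice()  program the new route into the HAL
//   -> on the app's next write, AudioTrack::obtainBuffer() sees CBLK_INVALID (DEAD_OBJECT)
//   -> restoreTrack_l() -> createTrack_l() creates a new Track on the new playback thread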

 


Origin blog.csdn.net/vviccc/article/details/105409403