Android Framework Audio Subsystem (04): AudioFlinger Startup Analysis

Master link for this series of articles: thematic sub-directory "Android Framework Audio Subsystem"


Summary and description of key points in this chapter:

This chapter mainly focuses on the AudioFlinger startup part shown in the upper-left portion of the audio framework diagram above. It analyzes the startup of AudioFlinger: the path from AudioFlinger's start to the end of its constructor, AudioFlinger's loadHwModule (the HAL operation), and AudioFlinger's openOutput implementation (which creates the MixerThread). It also continues part of the flow analysis from the previous section.


1 From the start of AudioFlinger to the end of Audio constructor execution

The startup of AudioFlinger is implemented in main_mediaserver.cpp; the code is as follows:

int main(int argc __unused, char** argv)
{
    signal(SIGPIPE, SIG_IGN);
    char value[PROPERTY_VALUE_MAX];
    //...
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    //...
    AudioFlinger::instantiate();//AudioFlinger
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();//AudioPolicyService
    SoundTriggerHwService::instantiate();
    registerExtensions();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
	//...
}

Here we focus on analyzing the implementation of AudioFlinger::instantiate(); the code is as follows:

static void instantiate() { publish(); }

Continue with publish(); the code is as follows:

    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService( // register the service with servicemanager
                String16(SERVICE::getServiceName()),
                new SERVICE(), allowIsolated);
    }
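
For context, here is a minimal sketch of how a client process can later look up the service registered by publish(). The service name "media.audio_flinger" is what AudioFlinger::getServiceName() returns in AOSP, and AudioSystem::get_audio_flinger() performs essentially this lookup:

sp<IServiceManager> sm = defaultServiceManager();
// "media.audio_flinger" is the name returned by AudioFlinger::getServiceName()
sp<IBinder> binder = sm->getService(String16("media.audio_flinger"));
// obtain the IAudioFlinger proxy used by AudioSystem and other clients
sp<IAudioFlinger> af = interface_cast<IAudioFlinger>(binder);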

Here the service is registered with service_manager. At the same time, an AudioFlinger instance is created, which executes AudioFlinger's constructor; the code is as follows:

AudioFlinger::AudioFlinger()
    : BnAudioFlinger(),
      mPrimaryHardwareDev(NULL),
      //...
      mGlobalEffectEnableTime(0),
      mPrimaryOutputSampleRate(0)
{
    getpid_cached = getpid();
    char value[PROPERTY_VALUE_MAX];
	//...
}

The constructor mainly initializes member variables. Next, look at the implementation of onFirstRef; the code is as follows:

void AudioFlinger::onFirstRef()
{
    int rc = 0;
    Mutex::Autolock _l(mLock);

    /* TODO: move all this work into an Init() function */
    char val_str[PROPERTY_VALUE_MAX] = { 0 };
    if (property_get("ro.audio.flinger_standbytime_ms", val_str, NULL) >= 0) {
        uint32_t int_val;
        if (1 == sscanf(val_str, "%u", &int_val)) {
            mStandbyTimeInNsecs = milliseconds(int_val);

        } else {
            mStandbyTimeInNsecs = kDefaultStandbyTimeInNsecs;
        }
    }
    // AudioFlinger is passed to PatchPanel here; nothing else of note is done
    mPatchPanel = new PatchPanel(this);
    mMode = AUDIO_MODE_NORMAL;
}

From the above analysis we can see that after registering its service, AudioFlinger remains passive: it does not actively create threads or perform operations, but simply waits to be called. This connects to the loadHwModule and openOutput calls analyzed in the previous chapter.
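
To recap the caller side, here is a simplified sketch of how those calls reach AudioFlinger through the IAudioFlinger binder interface (the exact path through AudioPolicyService was analyzed in the previous chapter; AUDIO_HARDWARE_MODULE_ID_PRIMARY is the "primary" module name defined in audio.h):

// caller-side sketch: AudioFlinger only acts when these binder calls arrive
const sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
// triggers AudioFlinger::loadHwModule(), i.e. loads audio.primary.<board>.so (section 2)
audio_module_handle_t module = af->loadHwModule(AUDIO_HARDWARE_MODULE_ID_PRIMARY);
// af->openOutput(module, ...) then creates the playback thread (section 3)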


2 AudioFlinger loadHwModule analysis

Continuing from the previous chapter's call af->loadHwModule(name), AudioFlinger's loadHwModule is implemented as follows:

audio_module_handle_t AudioFlinger::loadHwModule(const char *name)
{
    if (name == NULL) {
        return 0;
    }
    if (!settingsAllowed()) {
        return 0;
    }
    Mutex::Autolock _l(mLock);
    return loadHwModule_l(name);
}

Continue with loadHwModule_l; the code is implemented as follows:

// loadHwModule_l() must be called with AudioFlinger::mLock held
audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
{
    // check whether this interface has already been loaded
    for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
        if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
            ALOGW("loadHwModule() module %s already loaded", name);
            return mAudioHwDevs.keyAt(i);
        }
    }

    audio_hw_device_t *dev;
    // key point: open audio.primary.XXX.so and construct an audio_hw_device
    int rc = load_audio_interface(name, &dev);
    //...
    mHardwareStatus = AUDIO_HW_INIT;
    // initialization check
    rc = dev->init_check(dev);
    mHardwareStatus = AUDIO_HW_IDLE;
    AudioHwDevice::Flags flags = static_cast<AudioHwDevice::Flags>(0);
    //...
    audio_module_handle_t handle = nextUniqueId();
    // build an AudioHwDevice from dev and add it to mAudioHwDevs
    mAudioHwDevs.add(handle, new AudioHwDevice(handle, name, dev, flags));

    return handle;
}

Continue with load_audio_interface; the code is implemented as follows:

static int load_audio_interface(const char *if_name, audio_hw_device_t **dev)
{
    const hw_module_t *mod;
    int rc;

    // via hw_get_module_by_class -> load -> dlopen -> dlsym,
    // obtain the hw_module_t structure pointer of the vendor library in &mod
    rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, &mod);
    //...
    // build the audio_hw_device_t structure dev from mod
    rc = audio_hw_device_open(mod, dev);
    //...
}
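
For reference, audio_hw_device_open() itself is only a thin inline wrapper declared in hardware/libhardware/include/hardware/audio.h, roughly:

static inline int audio_hw_device_open(const struct hw_module_t* module,
                                       struct audio_hw_device** device)
{
    // ask the vendor module's open() method for the audio hardware interface;
    // the hw_device_t it returns is really an audio_hw_device_t
    return module->methods->open(module, AUDIO_HARDWARE_INTERFACE,
                                 (struct hw_device_t**)device);
}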

Here is a summary of how loadHwModule wraps the hardware layer:

  • AudioFlinger: AudioHwDevice (stored in the mAudioHwDevs collection)
  • audio_hw_hal.cpp: audio_hw_device
  • Vendor: AudioHardware (derived from AudioHardwareInterface)
  • AudioHwDevice: a wrapper around audio_hw_device (see the sketch after this list)
  • audio_hw_device: its functions are implemented through the vendor's AudioHardware class object
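
A minimal sketch of that wrapper relationship, simplified from AudioHwDevice.h (member layout and flag values are abbreviated assumptions based on the AOSP sources of this era):

#include <hardware/audio.h>   // audio_hw_device_t, audio_module_handle_t
#include <string.h>           // strdup

// simplified sketch of frameworks/av/services/audioflinger/AudioHwDevice.h
class AudioHwDevice {
public:
    enum Flags {
        AHWD_CAN_SET_MASTER_VOLUME = 0x1,
        AHWD_CAN_SET_MASTER_MUTE   = 0x2,
    };

    AudioHwDevice(audio_module_handle_t handle,
                  const char *moduleName,
                  audio_hw_device_t *hwDevice,
                  Flags flags)
        : mHandle(handle), mModuleName(strdup(moduleName)),
          mHwDevice(hwDevice), mFlags(flags) {}

    audio_hw_device_t *hwDevice() const { return mHwDevice; }  // raw HAL device
    const char *moduleName() const { return mModuleName; }     // e.g. "primary"

private:
    const audio_module_handle_t mHandle;   // key stored in mAudioHwDevs
    const char * const mModuleName;
    audio_hw_device_t * const mHwDevice;   // functions implemented by the vendor's AudioHardware
    const Flags mFlags;
};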

3 AudioFlinger's openOutput implementation

Continuing from the previous chapter's call af->openOutput(module, output, config, devices, address, latencyMs, flags), AudioFlinger's openOutput is implemented as follows:

status_t AudioFlinger::openOutput(audio_module_handle_t module,
                                  audio_io_handle_t *output,
                                  audio_config_t *config,
                                  audio_devices_t *devices,
                                  const String8& address,
                                  uint32_t *latencyMs,
                                  audio_output_flags_t flags)
{
    //...
    Mutex::Autolock _l(mLock);
    // key point
    sp<PlaybackThread> thread = openOutput_l(module, output, config, *devices, address, flags);
    if (thread != 0) {
        *latencyMs = thread->latency();

        // notify client processes of the new output creation
        thread->audioConfigChanged(AudioSystem::OUTPUT_OPENED);

        // the first primary output opened designates the primary hw device
        if ((mPrimaryHardwareDev == NULL) && (flags & AUDIO_OUTPUT_FLAG_PRIMARY)) {
            ALOGI("Using module %d has the primary audio interface", module);
            mPrimaryHardwareDev = thread->getOutput()->audioHwDev;

            AutoMutex lock(mHardwareLock);
            mHardwareStatus = AUDIO_HW_SET_MODE;
            mPrimaryHardwareDev->hwDevice()->set_mode(mPrimaryHardwareDev->hwDevice(), mMode);
            mHardwareStatus = AUDIO_HW_IDLE;

            mPrimaryOutputSampleRate = config->sample_rate;
        }
        return NO_ERROR;
    }
    return NO_INIT;
}

Now focus on the implementation of openOutput_l; the code is as follows:

sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{
    // find the corresponding audio interface
    AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices);
    //...
    audio_hw_device_t *hwDevHal = outHwDev->hwDevice();
    if (*output == AUDIO_IO_HANDLE_NONE) {
        *output = nextUniqueId();
    }

    mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
    audio_stream_out_t *outStream = NULL;
    // for every output profile in each module, open_output_stream is executed
    // as long as the flags are not special
    status_t status = hwDevHal->open_output_stream(hwDevHal,
                                                   *output,
                                                   devices,
                                                   flags,
                                                   config,
                                                   &outStream,
                                                   address.string());

    mHardwareStatus = AUDIO_HW_IDLE;

    // create the PlaybackThread
    if (status == NO_ERROR && outStream != NULL) {
        AudioStreamOut *outputStream = new AudioStreamOut(outHwDev, outStream, flags);

        PlaybackThread *thread;
        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, outputStream, *output, devices);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                || !isValidPcmSinkFormat(config->format)
                || !isValidPcmSinkChannelMask(config->channel_mask)) {
            thread = new DirectOutputThread(this, outputStream, *output, devices);
        } else {
            // normally a mixing thread is created; output, representing the AudioStreamOut object, is also passed in
            thread = new MixerThread(this, outputStream, *output, devices);
        }
        // add the created thread to the playback thread list mPlaybackThreads
        mPlaybackThreads.add(*output, thread);
        return thread;
    }
    return 0;
}
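
For orientation, the open_output_stream call above goes to a function pointer inside the HAL's audio_hw_device struct; its declaration in hardware/audio.h looks roughly like this (the vendor fills *stream_out with its stream implementation):

    int (*open_output_stream)(struct audio_hw_device *dev,
                              audio_io_handle_t handle,
                              audio_devices_t devices,
                              audio_output_flags_t flags,
                              struct audio_config *config,
                              struct audio_stream_out **stream_out,
                              const char *address);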

The key step here is creating the MixerThread; each such thread corresponds to one output.
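
Because mPlaybackThreads is keyed by the audio_io_handle_t, AudioFlinger can later retrieve the thread serving a given output. A minimal sketch of that lookup, modeled on AudioFlinger::checkPlaybackThread_l():

// must be called with AudioFlinger::mLock held
AudioFlinger::PlaybackThread *AudioFlinger::checkPlaybackThread_l(audio_io_handle_t output) const
{
    // returns NULL if no playback thread was opened for this output handle
    return mPlaybackThreads.valueFor(output).get();
}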

Origin blog.csdn.net/vviccc/article/details/105275199