Android Audio (6) - Audio Systems Analysis

One, AudioPolicyService startup process analysis

1. When playing a sound, the device the sound plays from is decided by the audio policy.

2. Each sound corresponds to an output, and for each output the system has a corresponding playback thread.

3. Hardware access operations are completed by AudioFlinger.

4. When AudioPolicyService starts, it reads and parses the configuration file /system/etc/audio_policy.conf, then directs AudioFlinger to open the outputs described in the configuration file and create the corresponding threads.

5. The configuration file on tiny4412:

# cat /system/etc/audio_policy.conf

global_configuration { 
  attached_output_devices AUDIO_DEVICE_OUT_SPEAKER 
  default_output_device AUDIO_DEVICE_OUT_SPEAKER 
  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC 
} 

audio_hw_modules { // note "modules": each entry below is a module
  primary {     // this is one module; a module corresponds to one .so file provided by the manufacturer
    outputs { // note "outputs": each entry below is an output
      primary { // this is one output; its parameters are configured here in "name value" format
        sampling_rates 44100
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_WIRED_HEADSET|AUDIO_DEVICE_OUT_WIRED_HEADPHONE|AUDIO_DEVICE_OUT_ALL_SCO|AUDIO_DEVICE_OUT_AUX_DIGITAL
        flags AUDIO_OUTPUT_FLAG_PRIMARY
      }
    }
    inputs {
      primary {
        sampling_rates 8000|11025|12000|16000|22050|24000|32000|44100|48000
        channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_BUILTIN_MIC|AUDIO_DEVICE_IN_WIRED_HEADSET|AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET|AUDIO_DEVICE_IN_AUX_DIGITAL|AUDIO_DEVICE_IN_VOICE_CALL
      }
    }
  }
}

 

6. AudioPolicyService startup process analysis

a. Load and parse /vendor/etc/audio_policy.conf or /system/etc/audio_policy.conf:
   for each module entry in the configuration file, new HwModule(name), placed into the mHwModules array;
   for each output in a module, new IOProfile, placed into the module's mOutputProfiles;
   for each input in a module, new IOProfile, placed into the module's mInputProfiles.
b. Load the manufacturer-provided .so file according to the module name in the file (loaded via AudioFlinger); this step is very important!
c. Open the corresponding outputs (opened via AudioFlinger).
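The structures built in step (a) can be pictured with a minimal sketch. The types and the parseAudioPolicyConf() helper below are simplified stand-ins for the real AudioPolicyManager code (which actually parses the conf file text); here the tiny4412 entries are hard-coded just to show where each value ends up.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Simplified stand-ins for the real HwModule / IOProfile classes.
struct IOProfile {
    std::string name;          // e.g. "primary"
    unsigned samplingRate;     // e.g. 44100
};

struct HwModule {
    std::string name;                                        // module name from the conf file
    std::vector<std::shared_ptr<IOProfile>> mOutputProfiles; // one entry per output
    std::vector<std::shared_ptr<IOProfile>> mInputProfiles;  // one entry per input
};

// Mimics step (a): one HwModule per module entry, one IOProfile per
// output/input entry. The real code reads /system/etc/audio_policy.conf;
// the tiny4412 values are hard-coded for illustration.
std::vector<std::shared_ptr<HwModule>> parseAudioPolicyConf() {
    std::vector<std::shared_ptr<HwModule>> mHwModules;

    auto primary = std::make_shared<HwModule>();
    primary->name = "primary";

    auto out = std::make_shared<IOProfile>();
    out->name = "primary";
    out->samplingRate = 44100;
    primary->mOutputProfiles.push_back(out);

    auto in = std::make_shared<IOProfile>();
    in->name = "primary";
    in->samplingRate = 8000;   // first of the listed input rates
    primary->mInputProfiles.push_back(in);

    mHwModules.push_back(primary);
    return mHwModules;
}
```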

7. The "primary" HwModule member comes from the audio_hw_modules section of the configuration file.

8. instantiate() registers the service

main() // framework/main_mediaserver.cpp
    AudioPolicyService::instantiate();
instantiate() is implemented in the BinderService class and performs addService.
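The BinderService pattern can be sketched roughly as below. The gServices map is a stand-in for the real ServiceManager (the real instantiate() calls defaultServiceManager()->addService()); the template and the service class mirror the pattern only, not the actual framework source.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Stand-in for the real ServiceManager registry.
static std::map<std::string, std::shared_ptr<void>> gServices;

// Sketch of the BinderService<T> helper: instantiate() constructs the
// service and registers it under the name the service class reports.
template <typename SERVICE>
struct BinderService {
    static void instantiate() {
        // the real code does: defaultServiceManager()->addService(name, new SERVICE())
        gServices[SERVICE::getServiceName()] = std::make_shared<SERVICE>();
    }
};

// AudioPolicyService derives from BinderService<AudioPolicyService>,
// so AudioPolicyService::instantiate() performs the addService call.
struct AudioPolicyService : BinderService<AudioPolicyService> {
    static const char* getServiceName() { return "media.audio_policy"; }
};
```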

 

Two, AudioFlinger startup process analysis

Reference: 002 UML diagram

1. Load the .so files in /system/lib/hw

2. AudioFlinger startup process analysis

a. Register the AudioFlinger service.
b. AudioPolicyService calls into AudioFlinger to open the .so file provided by the manufacturer.
b.1 Which .so file is loaded? What is its name, and where does the name come from?
    The name comes from /system/etc/audio_policy.conf: the module is named "primary", so the .so file is audio.primary.XXX.so, e.g. audio.primary.tiny4412.so (built from audio_hw_hal.cpp and AudioHardware.cpp).

b.2 Which source files does this .so consist of? See the Android.mk:
    audio.primary.$(TARGET_DEVICE): device/friendly-arm/common/libaudio/AudioHardware.cpp
                                    libhardware_legacy
    libhardware_legacy: hardware/libhardware_legacy/audio/audio_hw_hal.cpp

    Conclusion: it is mainly made up of AudioHardware.cpp and audio_hw_hal.cpp.

b.3 Hardware wrapping:
    AudioFlinger.cpp: wraps the hardware as AudioHwDevice (placed into the mAudioHwDevs array)
    audio_hw_hal.cpp: wraps the hardware as audio_hw_device
    manufacturer: wraps the hardware as AudioHardware (derived from AudioHardwareInterface)

    AudioHwDevice is an encapsulation of audio_hw_device;
    the functions of audio_hw_device are implemented through the AudioHardware class object.

c. AudioPolicyService is called to open outputs and create playback threads.
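The three wrapping layers in b.3 can be sketched as a small adapter chain. The types below are simplified stand-ins (the real audio_hw_device has many more function pointers); only the call path AudioHwDevice -> audio_hw_device -> AudioHardware is shown.

```cpp
#include <cassert>

// Layer 3 (manufacturer): C++ class that actually drives the hardware,
// analogous to AudioHardware in AudioHardware.cpp.
struct AudioHardware {
    int initCheck() { return 0; }   // 0 == OK
};

// Layer 2 (audio_hw_hal.cpp): C struct of function pointers that wraps
// the manufacturer's C++ object, analogous to audio_hw_device.
struct audio_hw_device {
    void* priv;                                   // points at the AudioHardware object
    int (*init_check)(audio_hw_device* dev);      // one of many entry points
};

static int adev_init_check(audio_hw_device* dev) {
    // forward the C call to the manufacturer's C++ implementation
    return static_cast<AudioHardware*>(dev->priv)->initCheck();
}

// Layer 1 (AudioFlinger.cpp): C++ wrapper around the C struct,
// analogous to AudioHwDevice.
struct AudioHwDevice {
    audio_hw_device* hwDevice;
    int initCheck() { return hwDevice->init_check(hwDevice); }
};
```

Building the chain and calling AudioHwDevice::initCheck() ends up in AudioHardware::initCheck(), which is the shape of the real call path.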

 

Three, AudioTrack creation process

1. AudioTrack::AudioTrack() calls set() to create the AudioTrack and bind it to the hardware.

2. Android uses an "output" to describe a sound output channel. One output corresponds to one playbackThread; this information is determined by AudioFlinger.

3. Overview of the AudioTrack creation process

a.1 C++ test program:
    frameworks/base/media/tests/audiotests/shared_mem_test.cpp
    It constructs a sine wave itself and keeps playing it until Ctrl+C.

a.2 Java test program (not yet tested):
    frameworks/base/media/tests/MediaFrameworkTest/src/com/android/mediaframeworktest/functional/audio/MediaAudioTrackTest.java

To play a sound, an AudioTrack object must be created (this AudioTrack can be implemented in C++ or in Java);
creating a Java AudioTrack causes the C++ AudioTrack object to be created, so the analysis centers on the C++ AudioTrack class. The important function involved when creating an AudioTrack is set().

b. Guess at the main work of the creation process
b.1 set() uses the AudioTrack's attributes to find, according to the AudioPolicy, the corresponding output and playbackThread (one output corresponds to one playbackThread)
b.2 create the corresponding Track in the playbackThread
b.3 establish shared memory between the APP's AudioTrack and the Track in the playbackThread's mTracks

c. Source code sequence diagram: 003 UML

 

Four, AudioTrack creation process _ selecting an output

1. AudioTrack creation process _ selecting an output

a. The APP constructs an AudioTrack with a specified stream type
b. AudioTrack::setAttributesFromStreamType sets the attributes from the stream type
c. AudioPolicyManager::getStrategyForAttr obtains the strategy (category) of the sound from the attributes
d. AudioPolicyManager::getDeviceForStrategy selects, based on the category, the device from which the sound will play
e. AudioPolicyManager::getOutputForDevice gets an output according to the device.
    // there may be multiple outputs supporting the same device for a category
       e.1 AudioPolicyManager::getOutputsForDevice // get all outputs for this device
       e.2 output = selectOutput(outputs, flags, format); // select one output from the device's multiple outputs

2. AudioPolicyManager.mOutputs corresponds to the opened outputs; each entry contains an mProfile, which comes from /system/etc/audio_policy.conf and specifies which devices the output supports.

DefaultKeyedVector<audio_io_handle_t, sp<AudioOutputDescriptor> > mOutputs;

3. Multiple outputs may support one device; how is the most suitable output picked from among them?

In AudioPolicyManager::selectOutput,
the flags are matched first, to identify the best-fitting output:
a. The APP passes flags when creating the AudioTrack
b. The profile corresponding to each output also has flags (specified in /system/etc/audio_policy.conf)
c. The flags from a and b are compared, and the output with the highest degree of match is taken.
If the degree of match is the same:
if an output supports the primary flag, it is selected; otherwise the first output is taken.
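The matching rule in steps a-c can be sketched as follows. This is a simplified stand-in for AudioPolicyManager::selectOutput (the flag values and the Output struct are invented for the example): it counts matching flag bits and breaks ties in favor of an output whose profile carries the primary flag.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified stand-ins for the real output flag values.
enum : uint32_t {
    FLAG_PRIMARY     = 1u << 0,
    FLAG_FAST        = 1u << 1,
    FLAG_DEEP_BUFFER = 1u << 2,
};

struct Output {
    int      id;
    uint32_t profileFlags;   // flags from /system/etc/audio_policy.conf
};

// count the set bits shared by the requested and profile flags
static int matchCount(uint32_t a, uint32_t b) {
    uint32_t m = a & b;
    int n = 0;
    while (m) { n += m & 1u; m >>= 1; }
    return n;
}

// Selection rule from the text: best flag match wins; on a tie,
// prefer an output with the primary flag, else keep the first one.
int selectOutput(const std::vector<Output>& outputs, uint32_t requestedFlags) {
    int best = outputs.empty() ? -1 : outputs[0].id;
    int bestMatch = -1;
    bool bestIsPrimary = false;
    for (const Output& o : outputs) {
        int m = matchCount(requestedFlags, o.profileFlags);
        bool primary = (o.profileFlags & FLAG_PRIMARY) != 0;
        if (m > bestMatch || (m == bestMatch && primary && !bestIsPrimary)) {
            best = o.id;
            bestMatch = m;
            bestIsPrimary = primary;
        }
    }
    return best;
}
```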

 

Five, AudioTrack creation process _Track and shared memory

1. One output corresponds to one playback device (sound card) and to one playback thread. The audio data flows: playbackThread ---> output ---> sound device.

2. The application's AudioTrack and the playback thread's mTracks members correspond one to one. Shared memory is used to pass data between them.

3. AudioTrack creation process _Track and shared memory
Review:
a. The APP creates an AudioTrack <-----------------> AudioFlinger creates the corresponding Track in PlaybackThread
b. The APP's AudioTrack provides audio data in two ways: all at once (MODE_STATIC), or while playing (MODE_STREAM)

Questions:
a. The audio data lives in a buffer. Who provides the buffer, the APP or the PlaybackThread?
(1) MODE_STATIC: if the App provides the audio data all at once, the buffer is created by the App; this is more convenient because the App knows the required buffer size.
(2) MODE_STREAM: if the App provides audio data while playing, the buffer is created by the playbackThread; this is easier for the App to implement.

b. The APP provides data and the PlaybackThread consumes it; how do they synchronize?
(1) MODE_STATIC: provided all at once, so no synchronization is needed; it is strictly one-before, one-after.
(2) MODE_STREAM: data is provided while it is being played, the typical producer-consumer situation; a ring buffer is used to synchronize.

4. playbackThread's mTracks is a list; each Track on it corresponds to one application AudioTrack. It is the App's creation of an
AudioTrack that causes a Track to be created on the mTracks list.

5. The App and the playbackThread are in different processes. The audio data could be passed over Binder, but that is not very efficient, so shared memory is used to pass the data.

6. The test program shared_mem_test.cpp uses the one-shot approach:

AudioTrackTest::Test01()
    iMem = heap->allocate(BUF_SZ * sizeof(short));
    memcpy(p, smpBuf, BUF_SZ * sizeof(short));
    sp<AudioTrack> track = new AudioTrack(AUDIO_STREAM_MUSIC, // stream type
               rate,
               AUDIO_FORMAT_PCM_16_BIT, // word length, PCM
               AUDIO_CHANNEL_OUT_MONO,
               iMem); /* the audio data is passed in all at once here */

  In the C++ implementation, if the iMem parameter is not NULL, the shared memory was created by the App and the data will be transferred MODE_STATIC style; if it is NULL, the App did not create the buffer, the playbackThread will create the shared memory, and the data is transferred MODE_STREAM style.

The C++ AudioTrack does not use a mode to distinguish MODE_STATIC from MODE_STREAM; it distinguishes the two by whether the iMem parameter is NULL or not.

The C++ AudioTrack class does not need a mode to be specified, but Java's AudioTrack does:

MediaAudioTrackTest.java
testSetPlaybackHeadPositionTooFar()
    int TEST_MODE = AudioTrack.MODE_STREAM;
    AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT, 2*minBuffSize, TEST_MODE);

testSetPlaybackRateUninit()
    int TEST_MODE = AudioTrack.MODE_STATIC;
    AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT, minBuffSize, TEST_MODE);

7. The AudioTrack constructor:

AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes, int mode, int sessionId) // AudioTrack.java
    // native initialization
    native_setup(new WeakReference<AudioTrack>(this), mAttributes,
                mSampleRate, mChannels, mAudioFormat,
                mNativeBufferSizeInBytes, mDataLoadMode, session);
        // corresponding JNI function
        android_media_AudioTrack_setup // android_media_AudioTrack.cpp
            // native AudioTrack class object, constructed without arguments
            sp<AudioTrack> lpTrack = new AudioTrack();
            case MODE_STREAM:
                // for MODE_STREAM mode, call set() directly; no memory is allocated in the App process
                lpTrack->set()
            case MODE_STATIC:
                // for MODE_STATIC mode, allocate the buffer in the App process first
                lpJniStorage->allocSharedMem(buffSizeInBytes)
                lpTrack->set()

 

Six, the transfer of audio data

1. The App-side code is in AudioTrack.cpp; on the playbackThread side it is handled in Tracks.cpp.

2. mProxy manages the shared memory on the App side:

// frameworks\av\media\libmedia\AudioTrack.cpp
AudioTrack::createTrack_l()
    // update proxy
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }

The mServerProxy class is used to manage the shared memory on the playback-thread side:

// frameworks\av\services\audioflinger\Tracks.cpp
AudioFlinger::PlaybackThread::Track::Track
    
    if (sharedBuffer == 0) {
        mAudioTrackServerProxy = new AudioTrackServerProxy(mCblk, mBuffer, frameCount,
                mFrameSize, !isExternalTrack(), sampleRate);
    } else {
        mAudioTrackServerProxy = new StaticAudioTrackServerProxy(mCblk, mBuffer, frameCount,
                mFrameSize);
    }
    mServerProxy = mAudioTrackServerProxy;

3. The audio data transfer summary

. a APP created AudioTrack, playbackThread creates a corresponding Track, audio data is transmitted through the shared memory therebetween 
b APP There are two ways to use shared memory:. 
B.1 MODE_STATIC: Create a shared memory the APP, APP-time pad data 
b. 2 MODE_STREAM: APP using obtainBuffer obtain a blank memory, using releaseBuffer release memory after stuffing data 
c playbackThread use obtainBuffer get the memory containing the data, using releaseBuffer free the memory after the usage data. 
D AudioTrack the App end containing mProxy, which is used to manage shared. memory, which contains obtainBuffer, releaseBuffer function; 
   Track playback thread end containing mServerProxy, which is used to manage the shared memory, which contains obtainBuffer, releaseBuffer function 
   for the MODE different, these Proxy point to different objects. 
e. For manner MODE_STREAM, APP and playbackThread with ring buffers to transfer data.

Explanation of the ring buffer

a. Initially R = 0, W = 0; the length of the buffer is LEN.
b. Write one item of data:
	buf[W % LEN] = data;
	W++;
c. Read one item of data:
	data = buf[R % LEN];
	R++;
Determining that the buffer is full: R + LEN == W
Determining that the buffer is empty: R == W

From the above it can be seen that R and W only ever increase.

When overflow is a concern, R and W can both be reduced by the same integer multiple of LEN at the same time without affecting the result. Improvement: round LEN up to a power of 2; then the possibly slow W % LEN operation can be replaced by W & (LEN - 1).
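The scheme above can be turned into a small C++ sketch. R and W run freely (they only ever increase, and because only W - R and the masked indices are used, unsigned wraparound does not affect correctness), and LEN is required to be a power of two so the modulo becomes a mask, as the improvement suggests.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Ring buffer with free-running read/write counters, as described above.
// LEN must be a power of two so that "x % LEN" can be done as "x & (LEN - 1)".
template <size_t LEN>
struct RingBuffer {
    static_assert((LEN & (LEN - 1)) == 0, "LEN must be a power of two");
    int16_t  buf[LEN];
    uint32_t R = 0;   // total items read (only ever increases)
    uint32_t W = 0;   // total items written (only ever increases)

    bool full()  const { return W - R == LEN; }  // equivalent to R + LEN == W
    bool empty() const { return R == W; }

    bool write(int16_t data) {
        if (full()) return false;
        buf[W & (LEN - 1)] = data;   // W % LEN
        W++;
        return true;
    }

    bool read(int16_t* data) {
        if (empty()) return false;
        *data = buf[R & (LEN - 1)];  // R % LEN
        R++;
        return true;
    }
};
```

With one producer and one consumer, each counter is written by only one side, which is what makes this layout suitable for the APP/playbackThread pair.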

 

Seven, PlaybackThread processing flow

1. A sound card often supports only one audio data format (e.g. 2-channel, 44.1 kHz sample rate, 16-bit depth). However, Apps may pass down audio data in different formats,
so the playbackThread resamples the data to the format the sound card supports. The mAudioMixer object is responsible for this work. After resampling, the audio data of
all the Apps must also be mixed together (the "mix"), which is also done by mAudioMixer.

2. The hook member points to different handlers

// frameworks\av\services\audioflinger\AudioMixer.cpp
process__nop(): muted, no processing at all
process__genericNoResampling(): no resampling needed
process__genericResampling(): resampling needed
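The hook mechanism is just a function pointer re-chosen from the mixer state. A minimal sketch (the MixerState fields and updateHook() are simplified stand-ins for the real AudioMixer state machine, and lastHandler exists only for the example):

```cpp
#include <cassert>

struct MixerState;
typedef void (*process_hook_t)(MixerState* state);

struct MixerState {
    bool muted;
    bool needsResampling;
    process_hook_t hook;   // points at the handler chosen for this state
    int lastHandler;       // records which handler ran (example only)
};

// Handlers analogous to the ones listed above in AudioMixer.cpp.
static void process__nop(MixerState* s)                 { s->lastHandler = 0; } // muted
static void process__genericNoResampling(MixerState* s) { s->lastHandler = 1; }
static void process__genericResampling(MixerState* s)   { s->lastHandler = 2; }

// Re-pick the hook whenever the state changes; callers then just invoke it.
void updateHook(MixerState* s) {
    if (s->muted)                s->hook = process__nop;
    else if (s->needsResampling) s->hook = process__genericResampling;
    else                         s->hook = process__genericNoResampling;
}
```

The benefit of this design is that the per-sample loop pays no branching cost: the decision is made once per state change, not once per call.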

3. PlaybackThread processing flow

AudioFlinger::PlaybackThread::threadLoop()
frameworks\av\services\audioflinger\Threads.cpp
a. prepareTracks_l:
   determine the enabled and disabled tracks; for each enabled track, set its parameters in mState.tracks[x]
b. threadLoop_mix: process the data (e.g. resample) and mix.
   determine the hook: analyze each track's data format and determine mState.tracks[x].hook per track, then determine the overall mState.hook

   call the hook: calling the overall mState.hook causes every mState.tracks[x].hook to be called in turn

   after mixing, the data is placed in the temporary buffer mState.outputTemp, then format-converted and stored in thread.mMixerBuffer
c. memcpy_by_audio_format: copy the data from thread.mMixerBuffer or thread.mEffectBuffer to thread.mSinkBuffer
d. threadLoop_write: write thread.mSinkBuffer to the sound card
e. threadLoop_exit

4. Summary
  The original data is stored in shared memory. After resampling it is stored in the temporary buffer outputTemp; the data in this temporary buffer is the resampled data from multiple Tracks. The final mixed data is stored in mMixerBuffer. If sound effects are needed (such as bass boost), part of the data is also placed in mEffectBuffer. Afterwards, the audio data of mMixerBuffer and mEffectBuffer is output to mSinkBuffer, and then the mSinkBuffer data is sent to the sound card hardware.
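The mix step itself can be sketched as summing into a wider temporary buffer and saturating on the way out. The function below is a simplified stand-in (mono, one format, no resampling or effects); the buffer names echo outputTemp and mSinkBuffer from the summary.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sum the samples of several tracks into a wide accumulator (the role of
// outputTemp), then saturate to 16-bit on the way out (toward mSinkBuffer).
std::vector<int16_t> mixTracks(const std::vector<std::vector<int16_t>>& tracks,
                               size_t frameCount) {
    std::vector<int32_t> outputTemp(frameCount, 0);   // wide temporary buffer
    for (const auto& t : tracks)
        for (size_t i = 0; i < frameCount && i < t.size(); i++)
            outputTemp[i] += t[i];

    std::vector<int16_t> sinkBuffer(frameCount);
    for (size_t i = 0; i < frameCount; i++) {
        int32_t v = outputTemp[i];
        if (v >  32767) v =  32767;   // clamp: summing can overflow 16 bits
        if (v < -32768) v = -32768;
        sinkBuffer[i] = (int16_t)v;
    }
    return sinkBuffer;
}
```

The wide accumulator is the reason outputTemp exists at all: adding several 16-bit tracks directly in 16 bits would wrap around instead of clipping.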

 


Origin www.cnblogs.com/hellokitty2/p/10932313.html