Study notes: the Android system audio framework

Audio framework

  1. The audio framework consists of:
    a) Application layer: audio-processing applications (APKs) written by application vendors for their specific needs;
    b) Framework layer: the Java classes used to develop audio-related features;
    c) JNI layer: hides the details of calls into the native audio framework, acting as the native relay for the corresponding Java interfaces;
    d) Libraries layer:
      i. Client part: the native implementation invoked by the JNI layer, which interacts with the server side over Binder;
      ii. Server part: system services, the core of the Android audio system;
    e) HAL layer: the audio hardware abstraction layer; each type of audio device has a corresponding HAL implementation;
    f) tinyalsa: a streamlined interface library that wraps calls to the user-space side of the ALSA audio driver framework.
  2. AudioFlinger : the core service of the audio system, the hub of Android audio and the main executor of its functions. It is a system service that exposes interfaces to the upper layers and interacts directly with the HAL (hardware audio abstraction layer). As the enforcer of audio policy, it manages the opening and closing of the different HAL outputs and processes the data of the different AudioTracks, including mixing, resampling, and volume control.
  3. AudioPolicyService : an essential service of the audio system and the policy maker of Android audio. Guided by the user configuration, it directs AudioFlinger to load the supported device HALs and formulates the priority policies for device selection for the different stream types.
  4. MediaPlayerService : an essential service of the multimedia system;
  5. CameraService : an essential service for camera/photo functionality;
  6. AudioTrack has two data loading modes:
    a) MODE_STREAM: in this mode audio data is written to the AudioTrack again and again through write calls, much like writing data to a file. Each call, however, must copy the data from the user-supplied buffer into AudioTrack's internal buffer, which introduces some latency;
    b) MODE_STATIC: in this mode all of the data is delivered to AudioTrack's internal buffer with a single write call before playback starts; no further data needs to be passed in afterwards. This suits material with a small memory footprint and tight latency requirements, such as ringtones. Its drawback is that a single write cannot carry too much data, or the system will be unable to allocate enough memory to hold it all.
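The difference between the two modes can be sketched with a toy class (illustrative only, not the real android.media.AudioTrack): STREAM-style use copies a chunk into the internal buffer on every write call, while STATIC-style use hands over the whole clip in one up-front copy.

```java
import java.util.Arrays;

// Toy model of AudioTrack's two loading modes (not the real Android API):
// writeChunk() imitates MODE_STREAM, writeAll() imitates MODE_STATIC.
class ToyTrack {
    private byte[] internal = new byte[0];

    // MODE_STREAM-style: one copy into the internal buffer per call.
    void writeChunk(byte[] chunk) {
        byte[] grown = Arrays.copyOf(internal, internal.length + chunk.length);
        System.arraycopy(chunk, 0, grown, internal.length, chunk.length); // per-call copy
        internal = grown;
    }

    // MODE_STATIC-style: a single copy of the whole clip before playback.
    void writeAll(byte[] clip) {
        internal = clip.clone();
    }

    int buffered() { return internal.length; }
}
```

Both paths end with the same data buffered; what differs is how many copies were made along the way, which is exactly the latency trade-off described above.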
  7. Shared memory:
    a) The memory space of each process is 4GB. The 4GB figure is determined by the pointer length: if a pointer is 32 bits long, the largest address is 0xFFFFFFFF, i.e. 4GB;
    b) The process memory space described above is a virtual address space; the pointers used in an application actually point into this virtual address space.
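The 4GB figure follows directly from the pointer width and can be checked by arithmetic:

```java
// A pointer of n bits can address 2^n distinct bytes; for n = 32 that is
// 0x00000000 through 0xFFFFFFFF, i.e. 4 GiB.
class AddressSpace {
    static long addressableBytes(int pointerBits) {
        return 1L << pointerBits;
    }
}
```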
  8. Steps for creating shared memory on the Linux platform:
    a) process A creates and opens a file, obtaining a file descriptor fd;
    b) process A maps the file into memory by calling mmap on the fd, passing parameters that request an inter-process shared mapping;
    c) process B opens the same file and also obtains a file descriptor, so A and B now have the same file open;
    d) process B likewise calls mmap with the appropriate parameters, passing in the fd it opened. By opening the same file and memory-mapping it, A and B achieve shared memory between processes.
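The file-backed sharing in the steps above can be sketched in Java with MappedByteBuffer. For brevity the sketch stays inside one process, using two independent mappings of the same file; on typical Linux systems a write through one mapping is visible through the other, which is the same mechanism two processes would rely on with mmap.

```java
import java.io.File;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;

// Two independent READ_WRITE mappings of the same file region observe each
// other's writes, mirroring the mmap-based sharing between processes A and B.
class SharedMapDemo {
    static byte roundTrip() throws Exception {
        File f = File.createTempFile("shm", ".bin");   // step a) create/open a file
        f.deleteOnExit();
        try (FileChannel chA = FileChannel.open(f.toPath(),
                 StandardOpenOption.READ, StandardOpenOption.WRITE);
             FileChannel chB = FileChannel.open(f.toPath(),
                 StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer mapA = chA.map(FileChannel.MapMode.READ_WRITE, 0, 16);
            MappedByteBuffer mapB = chB.map(FileChannel.MapMode.READ_WRITE, 0, 16);
            mapA.put(0, (byte) 42);   // "process A" writes through its mapping
            return mapB.get(0);       // "process B" reads through its own mapping
        }
    }
}
```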
  9. After configuration, MemoryHeapBase holds the following:
    a) mBase: the starting address of the shared memory;
    b) mSize: the size of the allocated memory;
    c) mFD: the file descriptor returned by ashmem_create_region;
  10. The Java-layer AudioTrack uses a native AudioTrack object in the JNI layer. The native calling sequence is:
    a) new an AudioTrack, using the parameterless constructor;
    b) call the set function, passing in the parameters from the Java layer and also providing an audioCallback function;
    c) call AudioTrack's start function;
    d) call AudioTrack's write function;
    e) after the work completes, call stop;
    f) finally, delete the native object;
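The sequence above is order-sensitive: set must come before start, and start before write. A toy state machine makes the constraint explicit (the method names mirror the native API; the bookkeeping is purely illustrative):

```java
// Toy lifecycle model of the native AudioTrack call sequence; each step is
// only legal from the state the previous step left behind.
class NativeTrackModel {
    private enum State { CONSTRUCTED, SET, STARTED, STOPPED }
    private State state = State.CONSTRUCTED;        // step a) parameterless construction
    final StringBuilder trace = new StringBuilder();

    void set() {                                    // step b) parameters + callback
        if (state != State.CONSTRUCTED) throw new IllegalStateException();
        state = State.SET; trace.append("set;");
    }
    void start() {                                  // step c)
        if (state != State.SET) throw new IllegalStateException();
        state = State.STARTED; trace.append("start;");
    }
    void write() {                                  // step d)
        if (state != State.STARTED) throw new IllegalStateException();
        trace.append("write;");
    }
    void stop() {                                   // step e)
        if (state != State.STARTED) throw new IllegalStateException();
        state = State.STOPPED; trace.append("stop;");
    }
}
```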
  11. flowControlFlag summary:
    a) For audio output, flowControlFlag corresponds to the underrun state: the data producer cannot keep pace with the consumer's rate of consumption. The consumer here is the audio output device. Since the output device manages its buffer as a ring, when the producer supplies no new data the device reuses the data already in the buffer, and the listener hears the same sound repeated. This phenomenon is commonly called "machine-gunning". The usual handling is to pause output and resume once the data is ready again.
    b) For audio input, flowControlFlag corresponds to the overrun state, whose meaning mirrors underrun except that the producer is now the audio input device and the consumer is the audio system's AudioRecord.
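The "machine-gun" effect can be sketched with a toy ring buffer (illustrative Java, not the AOSP classes): when the producer stalls, the consumer wraps around and re-reads the same stale frames.

```java
// Minimal ring-buffer model of underrun: the consumer (the output device)
// keeps reading even when the producer has stopped writing, so it replays
// whatever is already in the buffer.
class RingBuffer {
    final int[] frames;
    int writePos = 0, readPos = 0;   // total frames produced / consumed

    RingBuffer(int capacity) { frames = new int[capacity]; }

    void produce(int frame) { frames[writePos++ % frames.length] = frame; }

    // The device reads unconditionally; on underrun this returns stale data.
    int consume() { return frames[readPos++ % frames.length]; }

    boolean underrun() { return readPos >= writePos; }
}
```

Filling the buffer once and then consuming twice as many frames replays the same sequence twice, which is exactly the repeated sound described above.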
  12. AudioTrack has two data input modes:
    a) Push mode: the user actively calls write to write data, in effect pushing the data to AudioTrack;
    b) Pull mode: AudioTrackThread uses the callback function with the EVENT_MORE_DATA parameter, actively pulling data from the user. ToneGenerator provides data to AudioTrack in this way.
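The two modes can be contrasted in a short sketch (the event name mirrors the text; the interface and plumbing are illustrative, not the native API):

```java
// Push: the caller invokes write() with data it already has.
// Pull: the track thread invokes a callback asking the user for more data.
class ModeDemo {
    interface AudioCallback { int onEvent(String event, byte[] buffer); }

    static int pushMode(byte[] data) {            // caller pushes via write()
        return data.length;                       // frames accepted
    }

    static int pullMode(AudioCallback cb) {       // track thread pulls via callback
        byte[] buffer = new byte[8];
        return cb.onEvent("EVENT_MORE_DATA", buffer); // callback fills the buffer
    }
}
```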
  13. How the write operation works: data is transferred through shared memory, and the pace of producer and consumer is coordinated through a control structure.
  14. Interaction between AudioTrack (AT) and AudioFlinger (AF):
    a) AT calls createTrack and obtains an IAudioTrack object;
    b) AT calls start on the IAudioTrack object, indicating that it is ready to write data;
    c) AT writes data via write, a process closely tied to audio_track_cblk_t;
    d) finally, AT calls IAudioTrack's stop or deleteIAudioTrack to end the work;
  15. Android uses the Proxy pattern here: TrackHandle is the proxy for Track. Track is not based on Binder communication, so it cannot receive requests from remote processes. TrackHandle is based on Binder communication, can receive requests from remote processes, and can call the corresponding functions of Track.
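The Proxy relationship can be shown in miniature (illustrative Java; the real TrackHandle additionally inherits the Binder machinery):

```java
// "Track" does the real work but is not remotable; "TrackHandle" stands in
// front of it and forwards calls, the way the Binder-based TrackHandle
// forwards remote requests to Track inside AudioFlinger.
class ProxyDemo {
    interface ITrack { String start(); }

    static class Track implements ITrack {            // real object, not remotable
        public String start() { return "track started"; }
    }

    static class TrackHandle implements ITrack {      // Binder-side proxy
        private final Track track;
        TrackHandle(Track track) { this.track = track; }
        public String start() { return track.start(); }  // forward to the real Track
    }
}
```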
  16. PlaybackThread : the playback thread, used for audio output. It has a member variable mOutput of type AudioStreamOut*, indicating that PlaybackThread connects directly to the audio output device;
  17. RecordThread : the recording thread, used for audio input;
  18. MixerThread : the mixing thread; it mixes the audio data from multiple sources into the output;
  19. DirectOutputThread : the direct-output thread; it selects a single audio stream and outputs it directly, with no mixing, which eliminates a lot of latency;
  20. DuplicatingThread : for multiple outputs. It derives from MixerThread, meaning it can mix; it writes the final mixed data to multiple outputs, i.e. one set of data has multiple recipients.
  21. The basic workflow of the MixerThread thread function:
    a) if there are notifications or configuration requests, handle them first, for example notifying AF's listeners of certain information, or configuration requests such as volume control and sound-device switching;
    b) call the prepareTracks_l function to check whether the active Tracks have data ready;
    c) call the mixer object mAudioMixer's process, passing in a buffer; the mixed result is stored in this buffer;
    d) call write on the AudioStreamOut object representing the audio device, writing the result data to the device.
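The core of step c), mixing, amounts to summing the samples of the active tracks and clamping to the sample range. A minimal sketch for 16-bit PCM (illustrative; the real mAudioMixer also resamples and applies per-track volume):

```java
// Mix several 16-bit PCM tracks by summing sample-by-sample into one output
// buffer, clamping each sum to the short range to avoid wrap-around.
class MixDemo {
    static short[] mix(short[][] tracks) {
        short[] out = new short[tracks[0].length];
        for (int i = 0; i < out.length; i++) {
            int sum = 0;
            for (short[] t : tracks) sum += t[i];
            out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }
}
```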
  22. The process AF uses to read data:
    a) call framesReady to see whether there is readable data;
    b) obtain the starting position of the read: as with the write side described above, the address of the first readable data block is derived from the buffer's base address and the server-side offset;
    c) call stepServer to update the read position;
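The read-side bookkeeping can be sketched with a simplified control block (field names follow the text; the real audio_track_cblk_t is considerably more involved):

```java
// Toy control block: framesReady reports unread frames, readOffset derives
// the read position from the base of the ring buffer, and stepServer
// advances the server-side read position after a read completes.
class CblkModel {
    int user = 0, server = 0;        // total frames written / read so far
    final int frameCount;            // ring-buffer capacity in frames

    CblkModel(int frameCount) { this.frameCount = frameCount; }

    int framesReady() { return user - server; }

    int readOffset() { return server % frameCount; }   // offset from buffer base

    void stepServer(int frames) { server += frames; }  // advance after reading
}
```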
  23. The system has two core processors. One handles applications and is called the AP (application processor); you can think of it as the CPU of a desktop machine, capable of running an operating system. The other is related to mobile communications and is commonly called the BP (baseband processor).
  24. Both the BP and the AP can send audio data to the DSP, and in hardware these two channels do not interfere with each other. This raises a problem: if the two processors send data to the DSP simultaneously without coordinating, the call audio and music may come out mixed together.
  25. AudioManager.java
    Method: setStreamVolume(int streamType, int index, int flags)
    Role: sets the volume directly.
    streamType specifies the type of sound:
      STREAM_ALARM: phone alarm; STREAM_MUSIC: music;
      STREAM_RING: telephone ring; STREAM_SYSTEM: system sounds;
      STREAM_DTMF: DTMF tones; STREAM_NOTIFICATION: notifications;
      STREAM_VOICE_CALL: voice calls.
    index: the volume index to set.
    flags: optional flags, e.g. AudioManager.FLAG_SHOW_UI (show the volume bar) or AudioManager.FLAG_PLAY_SOUND (play a sound when changing the volume).

  1. AudioService : the Java-layer audio system basically does not participate in the data stream. The AudioService system service contains or uses almost everything audio-related, which makes AudioService the stronghold of the Java-layer audio system. AudioManager holds a Bp (proxy) end of AudioService and acts as its client-side proxy; almost all client requests made through AudioManager are ultimately handed to AudioService to carry out. AudioService's functionality depends on the AudioSystem class, which cannot be instantiated; it is the Java layer's proxy to the native layer, and through it AudioService communicates with AudioPolicyService and AudioFlinger;
  2. RemoteException : a communication exception;
  3. Method: adjustSuggestedStreamVolume(int direction, int suggestedStreamType, int flags)
    Role: adjusts the volume of the "suggested" stream;
  4. Method: setMasterMute(boolean mute, int flags)
    Role: sets the master mute;

Origin blog.csdn.net/qq_43443900/article/details/103238117