Audio-Related Study Notes

1. Playback

AVAudioPlayer
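A minimal playback sketch with AVAudioPlayer, assuming an illustrative bundled file name ("sound.caf") and a strong player property to keep the player alive while it plays:

#import <AVFoundation/AVFoundation.h>

// Play a bundled sound file; "sound.caf" and the `player` property are illustrative.
- (void)playBundledSound {
    NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"sound" withExtension:@"caf"];
    NSError *error = nil;
    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL error:&error];
    [self.player prepareToPlay];
    [self.player play];   // the player must stay referenced for as long as the sound plays
}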


Audio Queue Services: support for multichannel playback streams...
For sample code, see the SpeakHere sample.

OpenAL: positional audio.
For sample code, see oalTouch.


2. Recording

Determining whether the device supports recording:
[[AVAudioSession sharedInstance] inputIsAvailable]


AVAudioRecorder

To prepare for recording with an audio recorder (a minimal sketch follows the list):

  1. Specify a URL for the sound file.

  2. Set up the audio session.

  3. Set the audio recorder's initial state.
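A minimal sketch of those three steps, assuming ARC, a `recorder` property, and an illustrative file name and settings dictionary:

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

- (void)startRecording {
    // 1. Specify a URL for the sound file ("recording.caf" is illustrative).
    NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                             NSUserDomainMask, YES) lastObject];
    NSURL *fileURL = [NSURL fileURLWithPath:
                         [docsDir stringByAppendingPathComponent:@"recording.caf"]];

    // 2. Set up the audio session for recording.
    NSError *error = nil;
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord error:&error];
    [[AVAudioSession sharedInstance] setActive:YES error:&error];

    // 3. Set the audio recorder's initial state (settings values are illustrative).
    NSDictionary *settings = @{ AVFormatIDKey         : @(kAudioFormatLinearPCM),
                                AVSampleRateKey       : @44100.0,
                                AVNumberOfChannelsKey : @1 };
    self.recorder = [[AVAudioRecorder alloc] initWithURL:fileURL settings:settings error:&error];
    [self.recorder prepareToRecord];
    [self.recorder record];
}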

Audio Queue Services

Your application instantiates a recording audio queue object and provides a callback function. The callback stores the incoming audio data in memory for immediate use, or writes it to a file for long-term storage.
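A minimal sketch of such a recording callback, assuming a hypothetical RecorderState struct as the user data and a destination audio file opened elsewhere:

#import <AudioToolbox/AudioToolbox.h>

// Hypothetical user-data struct carried through the queue.
typedef struct {
    AudioFileID audioFile;     // destination file for long-term storage
    SInt64      currentPacket; // next packet index to write
    bool        isRunning;     // whether recording is still in progress
} RecorderState;

// Called by the recording audio queue each time a buffer has been filled.
static void HandleInputBuffer(void *inUserData,
                              AudioQueueRef inQueue,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc) {
    RecorderState *state = (RecorderState *)inUserData;

    // Write the received audio data to the file.
    if (inNumPackets > 0 &&
        AudioFileWritePackets(state->audioFile, false, inBuffer->mAudioDataByteSize,
                              inPacketDesc, state->currentPacket,
                              &inNumPackets, inBuffer->mAudioData) == noErr) {
        state->currentPacket += inNumPackets;
    }

    // Hand the buffer back to the queue so recording can continue.
    if (state->isRunning) {
        AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL);
    }
}

The callback is installed when the recording queue is created with AudioQueueNewInput.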

3. iOS Audio Unit Support

Audio Unit

Description

iPod Equalizer unit

The iPod EQ unit, of type kAudioUnitSubType_AUiPodEQ, provides a simple, preset-based equalizer you can use in your application. For a demonstration of how to use this audio unit, see the sample code project iPhoneMixerEQGraphTest.

3D Mixer unit

The 3D Mixer unit, of type kAudioUnitSubType_AU3DMixerEmbedded, lets you mix multiple audio streams, specify stereo output panning, manipulate playback rate, and more. OpenAL is built on top of this audio unit and provides a higher-level API well suited to game applications.

Multichannel Mixer unit

The Multichannel Mixer unit, of type kAudioUnitSubType_MultiChannelMixer, lets you mix multiple mono or stereo audio streams to a single stereo stream. It also supports left/right panning for each input. For a demonstration of how to use this audio unit, see the sample code project Audio Mixer (MixerHost).

Remote I/O unit

The Remote I/O unit, of type kAudioUnitSubType_RemoteIO, connects to audio input and output hardware and supports realtime I/O. For demonstrations of how to use this audio unit, see the sample code project aurioTouch; a minimal instantiation sketch also follows this table.

Voice Processing I/O unit

The Voice Processing I/O unit, of type kAudioUnitSubType_VoiceProcessingIO, has the characteristics of the I/O unit and adds echo suppression and other features for two-way communication.

Generic Output unit

The Generic Output unit, of type kAudioUnitSubType_GenericOutput, supports converting to and from linear PCM format; can be used to start and stop a graph.

   

Converter unit

The Converter unit, of type kAudioUnitSubType_AUConverter, lets you convert audio data from one format to another. You typically obtain the features of this audio unit by using the Remote I/O unit, which incorporates a Converter unit.

For more information on using system audio units, see Audio Unit Hosting Guide for iOS.

The iOS Dev Center provides two sample-code projects that demonstrate use of system audio units:  aurioTouch  and  iPhoneMultichannelMixerTest  .
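As a minimal illustration of how an application obtains one of these system audio units, the sketch below locates, instantiates, and starts the Remote I/O unit (render-callback and stream-format setup are omitted, and error checking is elided for brevity):

#import <AudioUnit/AudioUnit.h>

// Describe the Remote I/O unit and locate the matching audio component.
AudioComponentDescription ioDesc = {
    .componentType         = kAudioUnitType_Output,
    .componentSubType      = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AudioComponent ioComponent = AudioComponentFindNext(NULL, &ioDesc);

// Instantiate the unit, then initialize and start it.
AudioUnit ioUnit = NULL;
AudioComponentInstanceNew(ioComponent, &ioUnit);
// (Configure render callbacks and stream formats here, before initializing.)
AudioUnitInitialize(ioUnit);
AudioOutputUnitStart(ioUnit);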

Tips for Using Audio

Tip

Action

Use compressed audio appropriately

For AAC, MP3, and ALAC (Apple Lossless) audio, decoding can take place using hardware-assisted codecs. While efficient, this is limited to one audio stream at a time. If you need to play multiple sounds simultaneously, store those sounds using the IMA4 (compressed) or linear PCM (uncompressed) format.

Convert to the data format and file format you need

The afconvert tool in Mac OS X lets you convert to a wide range of audio data formats and file types. See “Preferred Audio Formats in iOS” and the afconvert man page.

Evaluate audio memory issues

When playing sound with Audio Queue Services, you write a callback that sends short segments of audio data to audio queue buffers. In some cases, loading an entire sound file to memory for playback, which minimizes disk access, is best. In other cases, loading just enough data at a time to keep the buffers full is best. Test and evaluate which strategy works best for your application.

   

Reduce audio file sizes by limiting sample rates, bit depths, and channels

Sample rate and the number of bits per sample have a direct impact on the size of your audio files. If you need to play many such sounds, or long-duration sounds, consider reducing these values to reduce the memory footprint of the audio data. For example, rather than use 44.1 kHz sampling rate for sound effects, you could use a 32 kHz (or possibly lower) sample rate and still provide reasonable quality.

Using monophonic (single-channel) audio instead of stereo (two channel) reduces file size. For each sound asset, consider whether mono could suit your needs.

Pick the appropriate technology

Use OpenAL when you want a convenient, high-level interface for positioning sounds in a stereo field or when you need low latency playback. To parse audio packets from a file or a network stream, use Audio File Stream Services. For simple playback of single or multiple sounds, use the AVAudioPlayer class. For recording to a file, use the AVAudioRecorder class. For audio chat, use the Voice Processing I/O unit. To play audio resources synced from a user’s iTunes library, use iPod Library Access. When your sole audio need is to play alerts and user-interface sound effects, use Core Audio’s System Sound Services. For other audio applications, including playback of streamed audio, precise synchronization, and access to packets of incoming audio, use Audio Queue Services.


Code for low latency

For the lowest possible playback latency, use OpenAL or use the I/O unit directly.



4. Audio Session
See AddMusic, a sample code project that demonstrates use of an audio session object in the context of a playback application. This sample also demonstrates coordination between application audio and iPod audio.



For application sound to play, or for recording to work, your audio session must be active.

You can change the category as often as you need to, whether your session is active or inactive.

An audio session comes with some default behavior. Specifically:

  • Playback is enabled and recording is disabled.
  • When the user moves the Silent switch (or Ring/Silent switch on iPhone) to the “silent” position, your audio is silenced.
  • When the user presses the Sleep/Wake button to lock the screen, or when the Auto-Lock period expires, your audio is silenced.
  • When your audio starts, other audio on the device—such as iPod audio that was already playing—is silenced.


For playback that continues when the screen is locked, use the AVAudioSessionCategoryPlayback category.

To ensure that iPod music is not interrupted, configure your audio session to allow mixing. Use the AVAudioSessionCategoryAmbient category.
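A minimal sketch of configuring and activating a session for playback that survives screen lock (error handling elided):

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];

// Playback category: audio keeps playing when the screen locks or the
// Ring/Silent switch is set to silent.
[session setCategory:AVAudioSessionCategoryPlayback error:&error];

// The session must be active before application sound will play.
[session setActive:YES error:&error];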

AVAudioSession considerations

  • A VoIP app, which spends most of its time running in the background, should ensure that its audio session is active only while the app is handling a call. In the background, standing ready to receive a call, a VoIP app’s audio session should not be active.
  • An app using the “recording” category should ensure that its audio session is active only while recording. Before recording starts and when recording stops, ensure your session is inactive to allow other sounds, such as incoming message alerts, to play. Alternatively, if your recording app also provides playback, you could switch to a playback category when not recording. This also allows alert sounds to play.

The system deactivates your audio session for a Clock or Calendar alarm or an incoming phone call. Interruptions happen when a competing audio session from a built-in application activates and that session is not categorized by the system to mix with yours.

 


iOS has six audio session categories (each governs whether your audio is affected by the system and other apps, or affects them):
  • Three for playback
  • One for recording
  • One that supports playback and recording—that need not occur simultaneously
  • One for offline audio processing
Factors governed by the category: the Ring/Silent switch, screen locking, and whether other audio plays or records.

How interruptions work
  • AV Foundation framework: The system automatically pauses playback or recording upon interruption, and reactivates your audio session when you resume playback or recording.
  • Audio Queue Services, I/O audio unit: You are responsible for saving the playback or recording position and reactivating your audio session after the interruption ends.
  • OpenAL: Implement the AVAudioSession interruption delegate methods or write an interruption listener callback function, as when using Audio Queue Services. However, the delegate or callback must additionally manage the OpenAL context (a minimal sketch follows this list).
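A minimal sketch of the OpenAL case, assuming the era's AVAudioSessionDelegate methods and an openALContext property holding the app's ALCcontext (both names are illustrative):

#import <AVFoundation/AVFoundation.h>
#import <OpenAL/alc.h>

// AVAudioSessionDelegate: interruption began.
- (void)beginInterruption {
    // Stop OpenAL rendering by detaching the context.
    alcMakeContextCurrent(NULL);
}

// AVAudioSessionDelegate: interruption ended.
- (void)endInterruption {
    // Reactivate the audio session, then restore the OpenAL context.
    NSError *error = nil;
    [[AVAudioSession sharedInstance] setActive:YES error:&error];
    alcMakeContextCurrent(self.openALContext);
}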

Route Changes
One of the audio hardware route change reasons in iOS is kAudioSessionRouteChangeReason_CategoryChange. In other words, a change in audio session category is considered by the system—in this context—to be a route change, and will invoke a route change property listener callback.

Triggers: the user plugs in or unplugs a headset, docks or undocks the device, or a Bluetooth device connects or disconnects.
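A minimal sketch of registering for route-change notifications with the C-level Audio Session API (the listener function name is illustrative; the session must already be initialized):

#include <AudioToolbox/AudioToolbox.h>

// Called whenever the audio route changes (headset plugged/unplugged, dock, Bluetooth, ...).
static void MyRouteChangeListener(void *inClientData,
                                  AudioSessionPropertyID inID,
                                  UInt32 inDataSize,
                                  const void *inData) {
    // inData is a CFDictionaryRef describing the change, including the
    // kAudioSession_AudioRouteChangeKey_Reason value.
}

// Register the listener.
AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange,
                                MyRouteChangeListener,
                                NULL);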

To set hardware preferences, use the AudioSessionSetProperty function along with the kAudioSessionProperty_PreferredHardwareSampleRate or kAudioSessionProperty_PreferredHardwareIOBufferDuration property.
 Although you can safely specify hardware preferences at any time after the audio session is initialized, best practice is to do so while the session is inactive. After you establish hardware preferences, activate the audio session and then query it to obtain the actual values. This final step is important because the system may not be able to provide what you ask for.

 To obtain meaningful values for hardware characteristics, ensure that the audio session is initialized and active before you issue queries.
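A minimal sketch of that sequence, using a preferred hardware sample rate as the example preference (error checking elided):

#include <AudioToolbox/AudioToolbox.h>

// Initialize the C-level audio session once per launch.
AudioSessionInitialize(NULL, NULL, NULL, NULL);

// Express a hardware preference while the session is still inactive.
Float64 preferredSampleRate = 44100.0;
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareSampleRate,
                        sizeof(preferredSampleRate), &preferredSampleRate);

// Activate the session, then ask what the hardware actually provides.
AudioSessionSetActive(true);

Float64 actualSampleRate = 0;
UInt32 size = sizeof(actualSampleRate);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                        &size, &actualSampleRate);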

 
kAudioSessionProperty_AudioInputAvailable
Whether audio input is available on the device. Use this property to determine if audio recording is possible.

Working with Movie Players
Prior to iOS 3.1.3, the movie player always used the system-supplied audio session. In iOS 3.2 and later, to get the same behavior you must set the movie player's useApplicationAudioSession property to NO.


If you have configured a movie player to use its own audio session, there is some cleanup to perform. To regain the ability to play audio after the movie finishes, or after the user dismisses it, perform these two steps in order (a minimal sketch follows the list):

  1. Dispose of the movie player, even if you will play the same movie again later.

  2. Reactivate your audio session.
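A minimal sketch, assuming ARC, an MPMoviePlayerController held in a moviePlayer property, and a handler for the playback-finished notification (both names are illustrative):

#import <MediaPlayer/MediaPlayer.h>
#import <AVFoundation/AVFoundation.h>

// During setup: have the movie player use its own, system-supplied audio session.
- (void)configureMoviePlayer {
    self.moviePlayer.useApplicationAudioSession = NO;
}

// Handler for MPMoviePlayerPlaybackDidFinishNotification (registration not shown).
- (void)moviePlaybackDidFinish:(NSNotification *)notification {
    // 1. Dispose of the movie player, even if the same movie will be played again later.
    self.moviePlayer = nil;   // releases the player under ARC

    // 2. Reactivate the application's audio session.
    NSError *error = nil;
    [[AVAudioSession sharedInstance] setActive:YES error:&error];
}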



Optimizing for minimum power consumption

For an example of how to find, instantiate, and configure audio units, see the iPhoneMultichannelMixerTest sample.


Audio session code works only on a device, not in the Simulator, so you may want to conditionalize it:

#if TARGET_IPHONE_SIMULATOR
#warning *** Simulator mode: audio session code works only on a device
    // Execute subset of code that works in the Simulator
#else
    // Execute device-only code as well as the other code 
#endif

Reproduced at https://my.oschina.net/dake/blog/196792


Reposted from: blog.csdn.net/weixin_34289744/article/details/91586404