Real-time recording and playback of iOS audio

Requirements: The company recently needed a building-intercom feature: an outdoor door station (on WiFi) dials the indoor unit (on the same WiFi); on receiving the call, the indoor unit forwards the incoming data over UDP broadcast. A mobile phone (on the same WiFi) receives the video stream and displays it in real time. The phone can answer or hang up; once the phone answers, the indoor unit stops displaying the video and only forwards it.

Put simply, the mobile client needs something like a live-streaming app: display video in real time, play received audio in real time, and send the phone's microphone audio back to the indoor unit in real time. The indoor unit forwards it on to the door station.

This article covers the audio half: how iOS records and plays back received sound data in real time.

To play and record audio in real time with the system frameworks, you need Audio Queue Services, part of the AudioToolbox framework. It handles both audio playback and recording.

An audio queue consists of three parts:

1. Buffers: each buffer is a temporary store for audio data.
2. A buffer queue: an ordered queue containing the audio buffers.
3. A callback: a custom callback function.

How it works internally is easy to look up online!

My simple understanding:

For playback: the system automatically takes data out of each buffer in the queue and plays it. All we need to do is keep refilling the buffers with received data, cycling through them, and leave the rest to the system.

For recording: the system automatically puts the recorded sound into each buffer in the queue. All we need to do is copy the data out into our own format from the callback function.

#pragma mark - Real-time playback

1. Import the system frameworks AudioToolbox.framework and AVFoundation.framework.

2. Request microphone permission: add Privacy - Microphone Usage Description to the project's Info.plist, with a description such as "App wants to access your microphone".
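In source form, that Info.plist entry looks like this (the description string is just an example; use whatever you want shown to the user):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>App wants to access your microphone</string>
```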

3. Create a class EYAudio that plays sound 

EYAudio.h

#import <Foundation/Foundation.h> 

@interface EYAudio : NSObject 

// Play a chunk of received stream data
- (void)playWithData:(NSData *)data;
// Reset the queue if playback misbehaves
- (void)resetPlay;
// Stop playing
- (void)stop;

@end

EYAudio.m

 #import "EYAudio.h"

#import <AVFoundation/AVFoundation.h> 
#import <AudioToolbox/AudioToolbox.h> 

#define MIN_SIZE_PER_FRAME 1920 // Size of each buffer; the indoor unit expects 960-byte packets, see the format configuration below
#define QUEUE_BUFFER_SIZE 3     // Number of buffers
#define SAMPLE_RATE 16000       // Sample rate

@interface EYAudio(){ 
  AudioQueueRef audioQueue; //Audio playback queue 
  AudioStreamBasicDescription _audioDescription; 
  AudioQueueBufferRef audioQueueBuffers[QUEUE_BUFFER_SIZE]; //Audio cache 
  BOOL audioQueueBufferUsed[QUEUE_BUFFER_SIZE]; // Whether each audioQueueBuffer is in use 
  NSLock *sysnLock; 
  NSMutableData *tempData; 
  OSStatus osState; 
} 
@end 

@implementation EYAudio

#pragma mark - Set AVAudioSessionCategoryMultiRoute for playback and recording in advance
+ (void)initialize
{
  NSError *error = nil;
  // Playback only: AVAudioSessionCategoryPlayback
  // Recording only: AVAudioSessionCategoryRecord
  // To play and record at the same time, use AVAudioSessionCategoryMultiRoute
  // rather than AVAudioSessionCategoryPlayAndRecord (which is tricky to get right)
  BOOL ret = [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryMultiRoute error:&error];
  if (!ret) {
    NSLog(@"Failed to set the audio session category");
    return;
  }
  // Activate the audio session
  ret = [[AVAudioSession sharedInstance] setActive:YES error:&error];
  if (!ret)
  {
    NSLog(@"Failed to activate the audio session");
    return;
  }
}

- (void)resetPlay
{
  if (audioQueue != nil) {
    AudioQueueReset(audioQueue);
  }
}

- (void)stop 
{ 
  if (audioQueue != nil) { 
    AudioQueueStop(audioQueue,true); 
  } 

  audioQueue = nil; 
  sysnLock = nil; 
} 

- (instancetype)init
{
  self = [super init];
  if (self) {
    sysnLock = [[NSLock alloc] init];

    // Audio format parameters — confirm the specifics with the backend
    _audioDescription.mSampleRate = SAMPLE_RATE;
    _audioDescription.mFormatID = kAudioFormatLinearPCM;
    _audioDescription.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    // Mono
    _audioDescription.mChannelsPerFrame = 1;
    // Frames per packet: for linear PCM this is 1
    _audioDescription.mFramesPerPacket = 1;
    // 16-bit quantization: bits per sample per channel
    _audioDescription.mBitsPerChannel = 16;
    // Bytes per frame = (bits per channel / 8) * channels per frame
    _audioDescription.mBytesPerFrame = (_audioDescription.mBitsPerChannel / 8) * _audioDescription.mChannelsPerFrame;
    // Bytes per packet = bytes per frame * frames per packet
    _audioDescription.mBytesPerPacket = _audioDescription.mBytesPerFrame * _audioDescription.mFramesPerPacket;

    // Create the output queue; playback runs on the queue's internal thread
    AudioQueueNewOutput(&_audioDescription, AudioPlayerAQInputCallback, (__bridge void * _Nullable)(self), nil, 0, 0, &audioQueue);

    // Set the volume
    AudioQueueSetParameter(audioQueue, kAudioQueueParam_Volume, 1.0);

    // Allocate the buffers
    for (int i = 0; i < QUEUE_BUFFER_SIZE; i++) {
      audioQueueBufferUsed[i] = false;
      osState = AudioQueueAllocateBuffer(audioQueue, MIN_SIZE_PER_FRAME, &audioQueueBuffers[i]);
    }

    osState = AudioQueueStart(audioQueue, NULL);
    if (osState != noErr) {
      NSLog(@"AudioQueueStart Error");
    }
  }
  return self;
}

// Play data
-(void)playWithData:(NSData *)data
{
  [sysnLock lock];

  tempData = [NSMutableData new];
  [tempData appendData: data];
  NSUInteger len = tempData.length;
  Byte *bytes = (Byte*)malloc(len);
  [tempData getBytes:bytes length: len];

  int i = 0;
  while (true) {
    if (!audioQueueBufferUsed[i]) {
      audioQueueBufferUsed[i] = true;
      break;
    } else {
      // the callback will mark buffers as unused again
      i++;
      if (i >= QUEUE_BUFFER_SIZE) {
        i = 0;
      }
    }
  }

  audioQueueBuffers[i]->mAudioDataByteSize = (unsigned int)len;
  // Copy len bytes starting at bytes into the i-th buffer's mAudioData
  memcpy(audioQueueBuffers[i]->mAudioData, bytes, len);

  // Free the temporary buffer
  free(bytes);

  // Enqueue the i-th buffer; the system handles the rest
  AudioQueueEnqueueBuffer(audioQueue, audioQueueBuffers[i], 0, NULL);

  [sysnLock unlock];
}

// ************************** Callback **************************
static void AudioPlayerAQInputCallback(void* inUserData,AudioQueueRef audioQueueRef, AudioQueueBufferRef audioQueueBufferRef) { 

  EYAudio* audio = (__bridge EYAudio*)inUserData;

  [audio resetBufferState:audioQueueRef and:audioQueueBufferRef]; 
} 

- (void)resetBufferState:(AudioQueueRef)audioQueueRef and:(AudioQueueBufferRef)audioQueueBufferRef {
  // Safety guard: when no data has arrived, re-enqueue a 1-byte buffer
  // so that an empty queue does not stall all subsequent playback
  if (tempData.length == 0) {
    audioQueueBufferRef->mAudioDataByteSize = 1;
    Byte *bytes = audioQueueBufferRef->mAudioData;
    bytes[0] = 0;
    AudioQueueEnqueueBuffer(audioQueueRef, audioQueueBufferRef, 0, NULL);
  }

  for (int i = 0; i < QUEUE_BUFFER_SIZE; i++) {
    // Mark this buffer as unused
    if (audioQueueBufferRef == audioQueueBuffers[i]) {
      audioQueueBufferUsed[i] = false;
    }
  }
}

@end

External use: call the following method repeatedly, passing in each received NSData chunk

- (void)playWithData:(NSData *)data;

#pragma mark - Real-time recording

1. Import the system frameworks AudioToolbox.framework and AVFoundation.framework.

2. Create the recording class EYRecord

EYRecord.h

#import <Foundation/Foundation.h> 

@interface ESARecord : NSObject 

//Start recording 
- (void)startRecording; 

//Stop recording 
- (void)stopRecording; 

@end

EYRecord.m

#import "ESARecord.h" 
#import <AudioToolbox/AudioToolbox.h> 

#define QUEUE_BUFFER_SIZE 3 // Number of audio queue buffers
#define kDefaultBufferDurationSeconds 0.03 // Tune this so each recorded buffer is 960 bytes; in practice it may come in under 960, so short buffers must be handled
#define kDefaultSampleRate 16000 // Sample rate: 16000 Hz

extern NSString * const ESAIntercomNotifationRecordString; 

static BOOL isRecording = NO; 

@interface ESARecord(){ 
  AudioQueueRef _audioQueue; //Output audio playback queue 
  AudioStreamBasicDescription _recordFormat; // Recording format 
  AudioQueueBufferRef _audioBuffers[QUEUE_BUFFER_SIZE]; //Output audio buffer 
} 
@property (nonatomic, assign) BOOL isRecording; 

@end 

@implementation ESARecord 

- (instancetype)init
{
  self = [super init];
  if (self) {
    // Reset
    memset(&_recordFormat, 0, sizeof(_recordFormat));
    _recordFormat.mSampleRate = kDefaultSampleRate;
    _recordFormat.mChannelsPerFrame = 1;
    _recordFormat.mFormatID = kAudioFormatLinearPCM;

    _recordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    _recordFormat.mBitsPerChannel = 16;
    _recordFormat.mBytesPerPacket = _recordFormat.mBytesPerFrame = (_recordFormat.mBitsPerChannel / 8) * _recordFormat.mChannelsPerFrame;
    _recordFormat.mFramesPerPacket = 1;

    // Initialize the audio input queue
    AudioQueueNewInput(&_recordFormat, inputBufferHandler, (__bridge void *)(self), NULL, NULL, 0, &_audioQueue);

    // Estimate the buffer size
    int frames = (int)ceil(kDefaultBufferDurationSeconds * _recordFormat.mSampleRate);
    int bufferByteSize = frames * _recordFormat.mBytesPerFrame;

    NSLog(@"bufferByteSize %d", bufferByteSize);

    // Create and enqueue the buffers
    for (int i = 0; i < QUEUE_BUFFER_SIZE; i++) {
      AudioQueueAllocateBuffer(_audioQueue, bufferByteSize, &_audioBuffers[i]);
      AudioQueueEnqueueBuffer(_audioQueue, _audioBuffers[i], 0, NULL);
    }
  }
  return self;
}

-(void)startRecording 
{ 
  // start recording 
  AudioQueueStart(_audioQueue, NULL);
  isRecording = YES;
}

void inputBufferHandler(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp *inStartTime,UInt32 inNumPackets, const AudioStreamPacketDescription *inPacketDesc)
{
  if (inNumPackets > 0) {
    ESARecord *recorder = (__bridge ESARecord*)inUserData;
    [recorder processAudioBuffer:inBuffer withQueue:inAQ];
  }
  
  if (isRecording) {
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
  }
}

- (void)processAudioBuffer:(AudioQueueBufferRef )audioQueueBufferRef withQueue:(AudioQueueRef )audioQueueRef
{
  NSMutableData * dataM = [NSMutableData dataWithBytes:audioQueueBufferRef->mAudioData length:audioQueueBufferRef->mAudioDataByteSize]; 
  
  if (dataM.length < 960) { // Pad buffers shorter than 960 bytes with 0x00
    Byte byte[] = {0x00};
    NSData *zeroData = [[NSData alloc] initWithBytes:byte length:1];
    for (NSUInteger i = dataM.length; i < 960; i++) {
      [dataM appendData:zeroData];
    }
  }

  // NSLog(@"Real-time recording data -- %@", dataM);
  // Post a notification to pass dataM out
  [[NSNotificationCenter defaultCenter] postNotificationName:@"EYRecordNotifacation" object:@{@"data" : dataM}];
} 

- (void)stopRecording
{
  if (isRecording)
  {
    isRecording = NO;

    // Stop the recording queue; no need to check the result here
    AudioQueueStop(_audioQueue, true);
    // Dispose of the queue and its buffers; true ends immediately,
    // false finishes processing the queued buffers first
    AudioQueueDispose(_audioQueue, true);
  }
  NSLog(@"stop recording");
}

@end

If it doesn't compile, try renaming EYRecord.m to EYRecord.mm.

Origin blog.csdn.net/ForeverMyheart/article/details/120323844