Reading video data with AVAssetReader

Introduction to AVAssetReader

AVAssetReader lets you obtain the media samples in a video file. You can either read the original, undecoded media samples directly as they are stored, or obtain samples decoded into a renderable form.
The documentation states that the AVAssetReader pipeline is multi-threaded. After initialization, the reader loads and processes a reasonable amount of sample data ahead of use, so retrieval operations such as -[AVAssetReaderOutput copyNextSampleBuffer] have very low latency. AVAssetReader is still not suitable for real-time sources, however, and its performance is not guaranteed for real-time operation.
Because some sample data is loaded and processed before use, the memory footprint can be significant. Pay attention to the number of readers in use at the same time; the higher the video resolution, the more memory each one occupies.
 

AVAssetReader initialization

Initialize the AVAssetReader with an AVAsset. As mentioned earlier, sample data is loaded after initialization, so this step already has an impact on memory. If memory is tight, do not initialize the reader ahead of time.

NSError *createReaderError;
_reader = [[AVAssetReader alloc] initWithAsset:_asset error:&createReaderError];
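
Creation can fail (for example for protected or unreadable assets), so it is worth checking the result before continuing. A minimal sketch:

if (createReaderError != nil || _reader == nil) {
    // Could not create the reader; inspect the error and bail out.
    NSLog(@"Failed to create AVAssetReader: %@", createReaderError);
    return;
}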

 

Setting the AVAssetReader output

Before reading starts, you need to add outputs, which control which tracks of the asset are read and how they are read.
Besides the AVAssetReaderTrackOutput used below, AVAssetReaderOutput has other subclasses such as AVAssetReaderVideoCompositionOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderSampleReferenceOutput.
AVAssetReaderTrackOutput is used for the demonstration here. It is initialized with a track, which is obtained from the asset.

NSArray *tracks = [_asset tracksWithMediaType:AVMediaTypeAudio];
if (tracks.count > 0) {
    AVAssetTrack *audioTrack = [tracks objectAtIndex:0];
}

or

NSArray *tracks = [_asset tracksWithMediaType:AVMediaTypeVideo];
if (tracks.count > 0) {
    AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
}
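
Note that tracksWithMediaType: can block while the asset loads its metadata. If the asset was only just created, one option is to load the "tracks" key asynchronously first; a sketch using AVAsset's standard key loading:

[_asset loadValuesAsynchronouslyForKeys:@[ @"tracks" ] completionHandler:^{
    NSError *loadError = nil;
    if ([_asset statusOfValueForKey:@"tracks" error:&loadError] == AVKeyValueStatusLoaded) {
        // tracksWithMediaType: will no longer block here.
        NSArray *videoTracks = [_asset tracksWithMediaType:AVMediaTypeVideo];
        // ... create the reader and output with videoTracks.firstObject ...
    }
}];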

You can also configure the output format; refer to the documentation for the other available settings.

NSDictionary * const VideoAssetTrackReaderOutputOptions = @{(id)kCVPixelBufferOpenGLESCompatibilityKey : @(YES),
                                                            (id)kCVPixelBufferIOSurfacePropertiesKey : @{},
                                                            (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
_readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:_track
                                                           outputSettings:VideoAssetTrackReaderOutputOptions];
if ([_reader canAddOutput:_readerOutput]) {
    [_reader addOutput:_readerOutput];
}

 

Seek operations

AVAssetReader is not suited to frequent random-access reads. If you need to seek frequently, you may need to implement it some other way.
You can set the reading range before reading starts. Once reading has started, the range cannot be modified, and you can only read sequentially forward.
There are two options for adjusting the reading range:

  1. Set supportsRandomAccess to YES on the output. You can then reset the read range, but the caller must first call copyNextSampleBuffer until the method returns NULL (see the sketch after this list).
  2. Alternatively, reinitialize an AVAssetReader and set the new read time on it.

If you go with the first option and need to seek, try to set a fairly short interval each time, so that reading an entire interval never takes too long, and ideally divide the intervals at key frames.
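
A minimal sketch of the first option, assuming supportsRandomAccess was set to YES before startReading was called; the reset itself goes through AVAssetReaderOutput's resetForReadingTimeRanges:. The seekTime and one-second window below are hypothetical values:

// Drain the current range first: the output may only be reset after
// copyNextSampleBuffer has returned NULL.
CMSampleBufferRef pending = NULL;
while ((pending = [_readerOutput copyNextSampleBuffer])) {
    CFRelease(pending);
}

// Re-aim the output at a new range around the seek target.
CMTimeRange newRange = CMTimeRangeMake(seekTime, CMTimeMake(1, 1));
[_readerOutput resetForReadingTimeRanges:@[ [NSValue valueWithCMTimeRange:newRange] ]];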
     

Read data

_reader.timeRange = range;   // must be set before startReading is called
[_reader startReading];
_sampleBuffer = [_readerOutput copyNextSampleBuffer];

CMSampleBuffer provides functions for obtaining the decoded data; for example, the image data can be retrieved with

CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(_sampleBuffer);
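
For audio track outputs the decoded bytes are carried in a CMBlockBuffer rather than an image buffer; a sketch of the corresponding accessors:

CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(_sampleBuffer);
size_t length = CMBlockBufferGetDataLength(blockBuffer);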

Note that when you are done with the CMSampleBuffer, you need to release it, since copyNextSampleBuffer follows the Core Foundation "copy" ownership rule:

CFRelease(_sampleBuffer);
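
Putting the pieces together, the usual pattern is a loop that pulls sample buffers until the output is exhausted, then checks the reader's status. A sketch:

[_reader startReading];
CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [_readerOutput copyNextSampleBuffer])) {
    // ... process the frame ...
    CFRelease(sampleBuffer);
}
if (_reader.status == AVAssetReaderStatusFailed) {
    NSLog(@"Reading failed: %@", _reader.error);
}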

 

Code example

NSDictionary * const AssetOptions = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};
NSDictionary * const VideoAssetTrackReaderOutputOptions = @{(id)kCVPixelBufferOpenGLESCompatibilityKey : @(YES),
                                                            (id)kCVPixelBufferIOSurfacePropertiesKey : @{},
                                                            (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
_videoAsset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:filePath]
                                      options:AssetOptions];
_videoTrack = [[_videoAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];
if (_videoTrack) {
    NSError *createReaderError;
    _reader = [[AVAssetReader alloc] initWithAsset:_videoAsset error:&createReaderError];
    if (!createReaderError) {
        _readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:_videoTrack
                                                                   outputSettings:VideoAssetTrackReaderOutputOptions];
        _readerOutput.supportsRandomAccess = YES;
        if ([_reader canAddOutput:_readerOutput]) {
            [_reader addOutput:_readerOutput];
        }
        [_reader startReading];

        if (_reader.status == AVAssetReaderStatusReading) {
            CMSampleBufferRef sampleBuffer = [_readerOutput copyNextSampleBuffer];
            if (sampleBuffer) {
                // Draw the image contained in the sample buffer.
                CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
                uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
                size_t width = CVPixelBufferGetWidth(imageBuffer);
                size_t height = CVPixelBufferGetHeight(imageBuffer);
                size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); // 32BGRA is non-planar
                size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

                CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
                CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
                // BGRA pixel data needs little-endian byte order for correct colors.
                CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow, rgbColorSpace,
                                                   kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                                   provider, NULL, true, kCGRenderingIntentDefault);
                // ... use cgImage here (e.g. assign it to a layer's contents) ...
                CGImageRelease(cgImage);
                CGDataProviderRelease(provider);
                CGColorSpaceRelease(rgbColorSpace);
                CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
                CFRelease(sampleBuffer);
            }
        }
    }
}
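
Each sample buffer also carries timing information. If you drive rendering yourself, the presentation timestamp tells you when the frame is meant to appear:

CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
NSLog(@"Frame should display at %.3f s", CMTimeGetSeconds(pts));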

Origin: blog.csdn.net/weixin_41191739/article/details/112783773