AVFoundation video stream processing

Framework

First, let's build a preliminary understanding of the framework as a whole.

AVFoundation's position in the relevant framework stack:

[Figure 1: AVFoundation's position in the framework stack]

To capture video, we need several classes (and their subclasses):

  • AVCaptureDevice represents an input device, such as a camera or a microphone.

  • AVCaptureInput represents a source of input data.

  • AVCaptureOutput represents a destination for output data.

  • AVCaptureSession coordinates the flow of data between the inputs and the outputs.

There is also AVCaptureVideoPreviewLayer, which provides a preview of the camera.

The whole picture can be summarized like this:

[Figure 2: relationships between the session, inputs, outputs, and preview layer]

Example

Using AVFoundation to capture a video stream is, in practice, not complicated.

Talk is cheap. Show me the code.

Below we briefly walk through the code for capturing video with AVFoundation; capturing audio or still images follows much the same process.

1. Create AVCaptureSession.

As the central coordinator of inputs and outputs, the session is the first thing we need to create.

AVCaptureSession *session = [[AVCaptureSession alloc] init];

2. Create AVCaptureDevice

Create an AVCaptureDevice to represent the input device. Here we specify a video capture device (the camera).

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
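
The default device is usually the back camera. If we wanted a specific camera instead, one option (on the same API generation as this example) is to pick a device by position. A minimal sketch:

// Sketch: picking the front camera by position.
// (On iOS 10+, AVCaptureDeviceDiscoverySession is the preferred API.)
AVCaptureDevice *frontCamera = nil;
for (AVCaptureDevice *d in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
    if (d.position == AVCaptureDevicePositionFront) {
        frontCamera = d;
        break;
    }
}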

3. Create AVCaptureDeviceInput and add it to the Session

We use AVCaptureDeviceInput to add the device to the session. AVCaptureDeviceInput is responsible for managing the device's ports; we can think of it as an abstraction of the device. A single device may be able to provide video and audio capture at the same time, and we can use separate AVCaptureDeviceInput instances to represent the video input and the audio input.

NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
[session addInput:input];
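
For example, if the same session should also capture audio, a second input can be created from the default audio device. A minimal sketch (error handling is elided, as above):

// Sketch: adding an audio input to the same session.
NSError *audioError = nil;
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&audioError];
// canAddInput: guards against configurations the session cannot accept.
if (audioInput && [session canAddInput:audioInput]) {
    [session addInput:audioInput];
}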

4. Create AVCaptureOutput

To get data out of the session, we need to create an AVCaptureOutput.

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];

5. Set the output's delegate, add the output to the session, and analyze the video stream in the delegate method

To analyze the video stream, we need to set a delegate on the output and specify the queue on which the delegate method is called. The key requirement is that the queue must be serial, to guarantee that video frames arrive in order.

dispatch_queue_t videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
[output setSampleBufferDelegate:self queue:videoDataOutputQueue];
[session addOutput:output];

We can then analyze the video stream in the delegate method captureOutput:didOutputSampleBuffer:fromConnection:.
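
A minimal sketch of what that delegate method might look like, assuming the class adopts AVCaptureVideoDataOutputSampleBufferDelegate:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Each frame arrives as a CMSampleBuffer wrapping a pixel buffer.
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    NSLog(@"got a %zux%zu frame", width, height);
    // ... per-frame analysis goes here ...
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}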

6. Start Capture

[session startRunning];

From the simple example above, we can see that using AVFoundation to capture a video stream is not very complicated. The real work lies in knowing the configuration details of each step, along with the performance issues involved.
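
As one illustration of those configuration details, here are a few settings that commonly affect performance (a sketch; the preset and pixel format are assumptions, choose whatever your analysis actually needs):

// Sketch: common performance-related configuration.
session.sessionPreset = AVCaptureSessionPresetMedium;  // lower resolution, cheaper frames
output.alwaysDiscardsLateVideoFrames = YES;            // drop late frames instead of queueing them
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };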

In Practice

Now that we've learned the basics, let's illustrate them with a concrete example.

We built a QR code recognition application based on AVFoundation: QRCatcher

[Figure 3: QRCatcher]

The application is available on the App Store and is fully open source.

Project Architecture:

|- Model
    |- URLEntity
|- View
    |- QRURLTableViewCell
    |- QRTabBar
|- Controller
    |- QRCatchViewController
    |- QRURLViewController
|- Tools
    |- NSString+Tools
    |- NSObject+Macro

The project is not complicated: a typical MVC architecture.

  • The Model layer has only a URLEntity, which stores the captured URL information. This project was also an opportunity to learn Core Data; NSFetchedResultsController was a pleasure to work with.

  • The View layer consists of a TableViewCell and a TabBar; the TabBar is subclassed mainly to change the tab bar's height.

  • In the Controller layer, QRCatchViewController is responsible for capturing and storing QR code information, while QRURLViewController manages and displays the collected URL information.

  • Tools holds helper classes that make development easier, collected from a utility library I usually maintain (open-source link). In this project they are mainly used to check whether a URL is valid, determine the device type, and so on.

With the basic architecture introduced, let's turn our attention back to the AVFoundation module. In this project, AVFoundation is primarily responsible for scanning and analyzing QR codes.

Let's look directly at the relevant code in QRCatchViewController.

For this application, only two core steps are needed.

1. Set up AVFoundation

- (void)setupAVFoundation
{
    //session
    self.session = [[AVCaptureSession alloc] init];
    //device
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    //input
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if(input) {
        [self.session addInput:input];
    } else {
        NSLog(@"%@", error);
        return;
    }
    //output
    AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
    [self.session addOutput:output];
    [output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
    [output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    //add preview layer
    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
    [self.preview.layer addSublayer:self.previewLayer];
    //start
    [self.session startRunning];
}
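
One practical detail: the preview layer added above still needs a frame (and usually a video gravity) before anything shows up on screen. A minimal sketch of that layout code, assuming the preview view's bounds are final at this point:

// Sketch: sizing the preview layer to its container view.
self.previewLayer.frame = self.preview.bounds;
self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;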

Here we can see that the steps are basically the same as those for capturing a video stream above.

That is:

  1. Create the session

  2. Create the device

  3. Create the input

  4. Create the output.

    This is where things differ from capturing a video stream. To capture a video stream we needed an AVCaptureVideoDataOutput, but here we want to capture QR code information, so we need an AVCaptureMetadataOutput. We also have to specify the type of metadata objects to capture. Here we specify AVMetadataObjectTypeQRCode, but we could also specify other types, such as the PDF417 barcode type (see the sketch after this list).

    The complete list of available types can be found here.

    We then also have to specify the delegate and the queue on which this information is processed.

  5. Start running the session
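
For instance, to recognize PDF417 barcodes alongside QR codes, the requested types can be filtered against what the output actually supports (a sketch; note that availableMetadataObjectTypes is only populated once the output has been added to the session):

// Sketch: requesting several metadata types, guarded by availability.
NSArray *wanted = @[AVMetadataObjectTypeQRCode, AVMetadataObjectTypePDF417Code];
NSMutableArray *supported = [NSMutableArray array];
for (NSString *type in wanted) {
    if ([output.availableMetadataObjectTypes containsObject:type]) {
        [supported addObject:type];
    }
}
[output setMetadataObjectTypes:supported];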

2. Implement the delegate method:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataMachineReadableCodeObject *metadata in metadataObjects) {
        if ([metadata.type isEqualToString:AVMetadataObjectTypeQRCode]) {
            self.borderView.hidden = NO;
            if ([metadata.stringValue isURL])
            {
                [[UIApplication sharedApplication] openURL:[NSString HTTPURLFromString:metadata.stringValue]];
                [self insertURLEntityWithURL:metadata.stringValue];
                self.stringLabel.text = metadata.stringValue;
            }
            else
            {
                self.stringLabel.text = metadata.stringValue;
            }
        }
    }
}

We receive the data in this delegate method and process it according to our needs. Here I simply test whether the string is a URL, and if it is, open it in Safari.
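
The isURL and HTTPURLFromString: calls above come from the project's NSString+Tools category. The real implementations live in the repository; as a purely hypothetical sketch of what such helpers might look like:

#import <Foundation/Foundation.h>

// Hypothetical sketch of the NSString+Tools helpers used above.
@interface NSString (Tools)
- (BOOL)isURL;
+ (NSURL *)HTTPURLFromString:(NSString *)string;
@end

@implementation NSString (Tools)

- (BOOL)isURL
{
    // Treat strings that parse into a URL with a host, or that start
    // with a common web prefix, as URLs.
    NSURL *url = [NSURL URLWithString:self];
    return url.host != nil || [self hasPrefix:@"www."];
}

+ (NSURL *)HTTPURLFromString:(NSString *)string
{
    // Prepend a scheme when the string lacks one, so openURL: can handle it.
    if ([string hasPrefix:@"http://"] || [string hasPrefix:@"https://"]) {
        return [NSURL URLWithString:string];
    }
    return [NSURL URLWithString:[@"http://" stringByAppendingString:string]];
}

@end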

Summary

This QR code application only demonstrates AVFoundation's ability to handle a video stream. In fact, AVFoundation can do much more: editing, track processing, and so on. If you need to work with video or audio, before looking for a third-party solution, take a look at this powerful module Apple has given us.
