How to convert a YUV NV21 CVPixelBufferRef to OpenCV's RGB cv::Mat on iOS

Preface

A business requirement called for this conversion. I have written two conversion methods so far.

Running on a physical iPhone X, one method takes 24 ms per frame at about 85% CPU usage, while the other takes 17 ms per frame at about 140% CPU usage. The details are below.

Method One

The conversion idea is the route CVPixelBufferRef -> UIImage -> cv::Mat.

Straight to the code.

First, the CVPixelBufferRef -> UIImage conversion:

- (UIImage *)uiImageFromPixelBuffer:(CVPixelBufferRef)p {
    // Wrap the pixel buffer in a CIImage.
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:p];
    // Use the software (CPU) renderer.
    CIContext *context = [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer : @(YES)}];
    CGRect rect = CGRectMake(0, 0, CVPixelBufferGetWidth(p), CVPixelBufferGetHeight(p));
    // Render the CIImage into a CGImage, then wrap it in a UIImage.
    CGImageRef videoImage = [context createCGImage:ciImage fromRect:rect];
    UIImage *image = [UIImage imageWithCGImage:videoImage];
    CGImageRelease(videoImage);
    return image;
}

Then the UIImage -> cv::Mat conversion:

- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
  CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
  CGFloat cols = image.size.width;
  CGFloat rows = image.size.height;
  cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)
  CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to data
                                                  cols,                       // Width of bitmap
                                                  rows,                       // Height of bitmap
                                                  8,                          // Bits per component
                                                  cvMat.step[0],              // Bytes per row
                                                  colorSpace,                 // Colorspace
                                                  kCGImageAlphaNoneSkipLast |
                                                  kCGBitmapByteOrderDefault); // Bitmap info flags
  CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
  CGContextRelease(contextRef);
  return cvMat;
}

When calling these two methods, wrap the calls in an @autoreleasepool block, otherwise memory usage will balloon; a sketch is below.
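For illustration, here is a minimal sketch of calling both methods from an AVCaptureVideoDataOutput sample buffer delegate, with the whole frame wrapped in @autoreleasepool (the delegate method comes from AVFoundation; how the resulting cv::Mat is consumed afterwards is up to you):

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        // Get the pixel buffer carried by this video frame.
        CVPixelBufferRef p = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Route: CVPixelBufferRef -> UIImage -> cv::Mat.
        UIImage *image = [self uiImageFromPixelBuffer:p];
        cv::Mat rgba = [self cvMatFromUIImage:image];
        // ... use rgba here; the CIImage/CGImage temporaries are released when the pool drains ...
    }
}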

This conversion route is slow, but the CPU usage stays fairly low.

Method Two

The conversion idea is the route CVPixelBufferRef -> YUV cv::Mat -> RGB cv::Mat.

The code:

- (cv::Mat)cvMatFromPixelBuffer:(CVPixelBufferRef)p
{
    CVPixelBufferLockBaseAddress(p, 0);
    // Grab the Y and UV plane addresses separately (see the note below).
    uint8_t *src_y = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(p, 0);
    uint8_t *src_uv = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(p, 1);
    int height = (int)CVPixelBufferGetHeight(p);
    int width = (int)CVPixelBufferGetWidth(p);
    // One-channel Mat holding the full 4:2:0 frame: height rows of Y, then height/2 rows of interleaved UV.
    cv::Mat yuvimg(height * 3 / 2, width, CV_8UC1);
    // Copy the Y plane.
    for (int i = 0; i < height * width; i++) {
        *(yuvimg.data + i) = *(src_y + i);
    }
    // Copy the interleaved UV plane below the Y data.
    int delta_h = height * 3 / 2 - height;
    for (int i = 0; i < delta_h; i++) {
        for (int j = 0; j < width; j++) {
            *(yuvimg.data + (height + i) * width + j) = *(src_uv + i * width + j);
        }
    }
    //writeImage2Document("testim.jpg", yuvimg);
    // Let OpenCV do the NV21 -> RGBA conversion.
    cv::Mat rgbimg(height, width, CV_8UC4);
    cv::cvtColor(yuvimg, rgbimg, cv::COLOR_YUV2RGBA_NV21);
    //CVPixelBufferRef testimg = [self getImageBufferFromMat:rgbimg];
    CVPixelBufferUnlockBaseAddress(p, 0);
    return rgbimg;
}

This method uses OpenCV to do the conversion directly, which is faster, but the CPU usage is higher.

Also note that the Y and UV data have to be written into the cv::Mat plane by plane, instead of copying straight from the address returned by CVPixelBufferGetBaseAddress. I'm not sure exactly why, but copying from CVPixelBufferGetBaseAddress leaves garbage at the start of the data and shifts the whole image to the right; a likely explanation is that for a planar buffer the base address does not point straight at tightly packed pixel data, and each plane can carry per-row padding, so the buffer cannot be treated as one contiguous YUV block.
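If you want a copy routine that also tolerates per-row padding, here is a sketch (not from the original post) that copies row by row using each plane's bytes-per-row; it assumes a bi-planar 4:2:0 buffer whose interleaved chroma matches the NV21 order expected by COLOR_YUV2RGBA_NV21:

- (cv::Mat)cvMatFromPixelBufferStrideAware:(CVPixelBufferRef)p
{
    CVPixelBufferLockBaseAddress(p, kCVPixelBufferLock_ReadOnly);
    int width = (int)CVPixelBufferGetWidth(p);
    int height = (int)CVPixelBufferGetHeight(p);
    uint8_t *src_y = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(p, 0);
    uint8_t *src_uv = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(p, 1);
    size_t stride_y = CVPixelBufferGetBytesPerRowOfPlane(p, 0);
    size_t stride_uv = CVPixelBufferGetBytesPerRowOfPlane(p, 1);

    cv::Mat yuvimg(height * 3 / 2, width, CV_8UC1);
    // Copy the Y plane row by row, skipping any padding at the end of each source row.
    for (int i = 0; i < height; i++) {
        memcpy(yuvimg.data + i * width, src_y + i * stride_y, width);
    }
    // Copy the interleaved UV plane (height/2 rows) the same way, below the Y data.
    for (int i = 0; i < height / 2; i++) {
        memcpy(yuvimg.data + (height + i) * width, src_uv + i * stride_uv, width);
    }
    CVPixelBufferUnlockBaseAddress(p, kCVPixelBufferLock_ReadOnly);

    // Same OpenCV conversion as above.
    cv::Mat rgbimg(height, width, CV_8UC4);
    cv::cvtColor(yuvimg, rgbimg, cv::COLOR_YUV2RGBA_NV21);
    return rgbimg;
}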

Call it a day~


Origin: blog.csdn.net/qq_19313495/article/details/127357984