Exploring the CoreImage framework of iOS

CoreImage provides image processing, face detection, image enhancement, built-in image filters, and image transitions. The data it operates on comes from Core Graphics, Core Video, and Image I/O, and it renders on either the CPU or the GPU. CoreImage encapsulates the underlying implementation and exposes an easy-to-use API to the layers above.

1. CoreImage framework

The CoreImage framework is divided into a rendering layer, a processing layer, and an API layer. The rendering layer covers GPU rendering (OpenGL and Metal) and CPU rendering (Grand Central Dispatch); the processing layer contains the built-in filters; the API layer interoperates with Core Graphics, Core Video, and Image I/O.

2. Image processing

1. Image processing flow

Image processing mainly involves three classes: CIContext, CIFilter, and CIImage. An example of the processing flow is as follows:

import CoreImage
import UIKit

// 1. Create a CIContext (contexts are expensive; create once and reuse)
let context = CIContext()
// 2. Create a CIFilter and set its intensity
let filter = CIFilter(name: "CISepiaTone")!
filter.setValue(0.8, forKey: kCIInputIntensityKey)
// 3. Create a CIImage from a file URL (mURL is assumed to exist)
let image = CIImage(contentsOf: mURL)
// 4. Feed the image into the filter
filter.setValue(image, forKey: kCIInputImageKey)
let result = filter.outputImage!
// 5. Use the context to create a CGImage (the context manages rendering and can be reused)
let cgImage = context.createCGImage(result, from: result.extent)!
// 6. Display the filtered result
imageView.image = UIImage(cgImage: cgImage)

2. Image data type

A CIImage used as filter input or output can be created from the following data types (see the sketch after this list):

URL of an image file, or NSData containing image data;

CGImageRef, UIImage, or NSBitmapImageRep objects;

Metal or OpenGL textures;

CVImageBufferRef or CVPixelBufferRef;

IOSurfaceRef, which shares data across processes;

in-memory bitmap data or a CIImageProvider.
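
A minimal sketch of constructing CIImage instances from a few of these sources (imageURL, someUIImage, and pixelBuffer are assumed to exist):

import CoreImage
import UIKit

// From a file URL
let fromURL = CIImage(contentsOf: imageURL)
// From an existing UIImage
let fromUIImage = CIImage(image: someUIImage)
// From a CVPixelBuffer (e.g. a camera or video frame)
let fromPixelBuffer = CIImage(cvPixelBuffer: pixelBuffer)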

3. Create Filter Chain

Filters can be chained so that the output of one becomes the input of the next. Sample code for creating a filter chain:

func applyFilterChain(to image: CIImage) -> CIImage {
    // Create a CIFilter with the photo-effect color filter
    let colorFilter = CIFilter(name: "CIPhotoEffectProcess",
                               parameters: [kCIInputImageKey: image])!
    
    // Apply a bloom filter to the color filter's output
    let bloomImage = colorFilter.outputImage!.applyingFilter("CIBloom",
                                                             parameters: [
                                                                kCIInputRadiusKey: 10.0,
                                                                kCIInputIntensityKey: 1.0
        ])
    
    // Crop the result
    let cropRect = CGRect(x: 350, y: 350, width: 150, height: 150)
    let croppedImage = bloomImage.cropped(to: cropRect)
    
    return croppedImage
}
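
A hypothetical call site for applyFilterChain, rendering the chained result for display (inputImage and imageView are assumed to exist):

let output = applyFilterChain(to: inputImage)
let context = CIContext()
if let cgImage = context.createCGImage(output, from: output.extent) {
    imageView.image = UIImage(cgImage: cgImage)
}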

4. Apply filters to video

Taking a Gaussian blur filter applied to a video as an example, the relevant code is as follows:

import AVFoundation
import CoreImage

// Create the Gaussian blur filter
let filter = CIFilter(name: "CIGaussianBlur")!
let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in

    // Clamp the frame to an infinite extent so the blur does not darken the edges
    let source = request.sourceImage.clampingToExtent()
    filter.setValue(source, forKey: kCIInputImageKey)
    
    // Vary the blur radius with the frame's timestamp
    let seconds = CMTimeGetSeconds(request.compositionTime)
    filter.setValue(seconds * 10.0, forKey: kCIInputRadiusKey)
    
    // Crop back to the frame's original extent
    let output = filter.outputImage!.cropped(to: request.sourceImage.extent)
    
    // Hand the filtered frame back to the video pipeline
    request.finish(with: output, context: nil)
})
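
To see the effect, the composition can be attached to playback or export; a minimal sketch, reusing the asset from above:

// Playback: attach the composition to a player item
let playerItem = AVPlayerItem(asset: asset)
playerItem.videoComposition = composition
let player = AVPlayer(playerItem: playerItem)

// Export: the same composition also works with an export session
let export = AVAssetExportSession(asset: asset,
                                  presetName: AVAssetExportPresetHighestQuality)
export?.videoComposition = composition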

5. Use Metal real-time filters

First create a Metal view for image rendering:

import UIKit
import MetalKit
import CoreImage

class ViewController: UIViewController, MTKViewDelegate {
    
    // Metal device, source texture, and command queue
    var device: MTLDevice!
    var commandQueue: MTLCommandQueue!
    var sourceTexture: MTLTexture!
    
    // Gaussian blur filter and rendering context
    var context: CIContext!
    let filter = CIFilter(name: "CIGaussianBlur")!
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    
    override func viewDidLoad() {
        super.viewDidLoad()
        // Create the device and command queue
        device = MTLCreateSystemDefaultDevice()
        commandQueue = device.makeCommandQueue()
        
        let view = self.view as! MTKView
        view.delegate = self
        view.device = device
        view.framebufferOnly = false
        // Create a CIContext backed by the Metal device
        context = CIContext(mtlDevice: device)
    }
}
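
The snippet above never populates sourceTexture; one way to load it is with MTKTextureLoader. A sketch, assuming an "input.png" resource in the app bundle:

let loader = MTKTextureLoader(device: device)
if let url = Bundle.main.url(forResource: "input", withExtension: "png") {
    sourceTexture = try? loader.newTexture(URL: url, options: nil)
}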

The real-time filter rendering happens in the delegate's draw callback; the sample code is as follows:

public func draw(in view: MTKView) {
    if let currentDrawable = view.currentDrawable {
        let commandBuffer = commandQueue.makeCommandBuffer()!
        // 1. Create a CIImage from the source texture and configure the filter
        let inputImage = CIImage(mtlTexture: sourceTexture, options: nil)!
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        filter.setValue(20.0, forKey: kCIInputRadiusKey)
        // 2. Render the filter output into the drawable's texture
        context.render(filter.outputImage!,
            to: currentDrawable.texture,
            commandBuffer: commandBuffer,
            bounds: inputImage.extent,
            colorSpace: colorSpace)
        // 3. Present the drawable and commit the command buffer
        commandBuffer.present(currentDrawable)
        commandBuffer.commit()
    }
}

3. Face detection

iOS provides CIDetector for face detection; the sample code (Objective-C) is as follows:

// 1. Create a CIContext
CIContext *context = [CIContext context];
// 2. Create the options dictionary, specifying detection accuracy
NSDictionary *opts = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };
// 3. Create the detector, specifying the detection type
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:context
                                          options:opts];
// 4. Specify the image orientation
opts = @{ CIDetectorImageOrientation :
          [[mImage properties] valueForKey:(__bridge NSString *)kCGImagePropertyOrientation] };
// 5. Fetch the detection results
NSArray *features = [detector featuresInImage:mImage options:opts];

The detection results include each face's bounds plus the positions of the left eye, right eye, and mouth. They can be inspected as follows:

for (CIFaceFeature *f in features) {
    NSLog(@"%@", NSStringFromCGRect(f.bounds));
 
    if (f.hasLeftEyePosition) {
        NSLog(@"Left eye x=%g y=%g", f.leftEyePosition.x, f.leftEyePosition.y);
    }
    if (f.hasRightEyePosition) {
        NSLog(@"Right eye x=%g y=%g", f.rightEyePosition.x, f.rightEyePosition.y);
    }
    if (f.hasMouthPosition) {
        NSLog(@"Mouth x=%g y=%g", f.mouthPosition.x, f.mouthPosition.y);
    }
}
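
Note that Core Image reports feature coordinates with the origin at the bottom-left of the image, while UIKit drawing uses a top-left origin. A small Swift sketch of the flip (imageHeight is the height of the source image):

// Convert a Core Image rect (bottom-left origin) to UIKit coordinates (top-left origin)
func flipped(_ rect: CGRect, imageHeight: CGFloat) -> CGRect {
    var r = rect
    r.origin.y = imageHeight - rect.origin.y - rect.size.height
    return r
}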

4. Image enhancement

The image enhancements provided by iOS include red-eye correction, face balance, color enhancement, and shadow/highlight adjustment, as shown in the table below:

Filter                    Description
CIRedEyeCorrection        Repairs red-eye caused by the camera flash
CIFaceBalance             Adjusts face color according to skin tone
CIVibrance                Boosts saturation
CIToneCurve               Adjusts contrast
CIHighlightShadowAdjust   Adjusts shadow detail

An example use of auto enhancement is as follows (Objective-C):

NSDictionary *options = @{ CIDetectorImageOrientation :
                 [[image properties] valueForKey:(__bridge NSString *)kCGImagePropertyOrientation] };
NSArray *adjustments = [image autoAdjustmentFiltersWithOptions:options];
CIImage *myImage = image;
for (CIFilter *filter in adjustments) {
     // Chain the suggested filters, feeding each one the previous output
     [filter setValue:myImage forKey:kCIInputImageKey];
     myImage = filter.outputImage;
}
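
For reference, a Swift sketch of the same chaining loop (image is an assumed CIImage); note that each filter must receive the previous filter's output, not the original image:

var adjusted = image
for filter in image.autoAdjustmentFilters() {
    filter.setValue(adjusted, forKey: kCIInputImageKey)
    adjusted = filter.outputImage ?? adjusted
}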

Origin: blog.csdn.net/u011686167/article/details/130898957