In-depth analysis of iOS image rendering and the use of CGImageRef (with source code)

One: Correct acquisition of image parameters

First, take a picture as the test image.

(Figure: the test picture and its file parameters)

The image's parameters are:
- 1236 pixels wide
- 748 pixels high
- RGB color space
- Color LCD profile
- with alpha channel

Remember these parameters; we will demonstrate how to read them correctly later.

Put this image in three different locations.
Position 1: the 2x image slot
Position 2: the 3x image slot
Position 3: directly in the bundle

Then we load the image from each of these three locations and print the width and height reported by UIImage, together with the width and height reported by its CGImageRef.
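The demonstration code and its output appear in the original post only as screenshots; a minimal sketch of the same comparison, with assumed asset and file names, might look like this:

    // A sketch only: "testImg" is an assumed asset/file name.
    UIImage *img = [UIImage imageNamed:@"testImg"]; // resolved from the 2x or 3x slot
    NSLog(@"UIImage size: %@, scale: %.0f", NSStringFromCGSize(img.size), img.scale);
    NSLog(@"CGImage size: %zu x %zu",
          CGImageGetWidth(img.CGImage), CGImageGetHeight(img.CGImage));

    NSString *bundlePath = [[NSBundle mainBundle] pathForResource:@"testImg" ofType:@"png"];
    UIImage *bundleImg = [UIImage imageWithContentsOfFile:bundlePath]; // bundle copy
    NSLog(@"Bundle UIImage size: %@", NSStringFromCGSize(bundleImg.size));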

You will find that if you simply load the image with the [UIImage imageNamed:@""] method, the width and height reported by UIImage do not match the image's original pixel size:
- for the image in the 2x slot, the reported height is the real height divided by 2;
- for the image in the 3x slot, the reported height is the real height divided by 3;
- for the image loaded directly from the bundle, the reported width and height are the real values.

The reason is that iOS applies a scale factor based on where the image is stored. For simple display this causes no problems, but if you need to crop a region of the image, be aware that the size of the UIImage obtained via [UIImage imageNamed:@""] is not the image's real pixel size. Instead, obtain the image's CGImageRef and read the real pixel dimensions from it.
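For example, CGImageCreateWithImageInRect works in pixel coordinates, so a point-based crop rect should first be multiplied by the image's scale. A small sketch, with an assumed asset name:

    // Sketch: convert a point-based rect to pixels before cropping.
    UIImage *img = [UIImage imageNamed:@"testImg"]; // assumed asset name
    CGFloat s = img.scale;                          // 2 for the 2x slot, 3 for the 3x slot
    CGRect cropInPoints = CGRectMake(10, 10, 100, 100);
    CGRect cropInPixels = CGRectMake(cropInPoints.origin.x * s, cropInPoints.origin.y * s,
                                     cropInPoints.size.width * s, cropInPoints.size.height * s);
    CGImageRef croppedRef = CGImageCreateWithImageInRect(img.CGImage, cropInPixels);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef scale:s orientation:img.imageOrientation];
    CGImageRelease(croppedRef);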

The official documentation describes [UIImage imageNamed:@""] as follows: the system first looks for the image in a cache and adjusts the returned image so that it is suitable for display on screen. So if an image is displayed many times in a project, loading it this way improves memory utilization and loading speed. If the image is used only once, it is better to load it with the imageWithContentsOfFile: method, which keeps the image out of the cache and reduces memory pressure.
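A minimal sketch of the two loading styles (asset and file names are assumptions):

    // Cached: good for images displayed many times.
    UIImage *cached = [UIImage imageNamed:@"banner"]; // assumed asset name
    // Uncached: good for one-off images; nothing is kept in the system cache.
    NSString *bannerPath = [[NSBundle mainBundle] pathForResource:@"banner" ofType:@"png"];
    UIImage *uncached = [UIImage imageWithContentsOfFile:bannerPath];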

Two: Using CGImageRef to get detailed image parameters

CGImageRef also stores a lot of image information. Printing it shows the following parameters:

- CGColorSpace:
  - kCGColorSpaceICCBased: an ICC-based color space, built on cross-platform ICC color profiles
  - kCGColorSpaceModelRGB: the RGB color model
  - Color LCD: the Color LCD color profile
- width = 1236: the image is 1236 pixels wide
- height = 748: the image is 748 pixels high
- bpc = 8: each channel occupies 8 bits, i.e. 256 possible values per channel
- bpp = 32: each pixel occupies 32 bits; R channel 8 bits + G channel 8 bits + B channel 8 bits + A channel 8 bits = 32 bits
- bytes per row = 4944: each row occupies 4944 bytes; the calculation is 32 bits per pixel × 1236 pixels per row ÷ 8 bits per byte = 4944 bytes
- kCGImageAlphaNoneSkipLast: space for an alpha component is reserved at the end of each pixel, but the alpha value is not stored or used
- 0 (default byte order): this is the CGBitmapInfo byte-order field; the default byte order is used
- kCGImagePixelFormatPacked: the pixel format; components are packed
- is mask? No: whether the image is a mask (a mask defines which parts of a layer are shown, to achieve special effects)
- has masking color?: if a masking color is set, the corresponding colors are rendered transparent
- has soft mask?: whether there is a soft (gradient) mask
- has matte? No: whether there is a matte. Both masks and mattes control the transparent regions of an image; a mask is usually drawn temporarily on a layer (node), while a matte typically uses a ready-made black-and-white image to define the transparent regions
- should interpolate? Yes: whether to interpolate (smooth) the image when it is scaled

Using CGImageRef we can get almost any parameter we want.

Three: Common methods for reading CGImageRef attributes

First obtain the CGImageRef reference; the methods introduced in this chapter all operate on it:

	UIImage *img = [UIImage imageNamed:@"8BitImg2x"];
	CGImageRef imgRef = [img CGImage];

1. Get the width of the image

size_t CGImageGetWidth(CGImageRef cg_nullable image)

This method returns a value of type size_t. size_t is short for "size type", an unsigned integer type whose actual width depends on the platform:
on 32-bit systems it is defined as typedef unsigned int size_t;
on 64-bit systems it is defined as typedef unsigned long size_t.
Either way, it can be treated as an unsigned integer.

The CGImageGetWidth method returns the pixel width of the image, and the usage and output examples are as follows:

    size_t imgWidth = CGImageGetWidth(imgRef);
    printf("Image pixel width: %zu", imgWidth);
    Output: Image pixel width: 1236

2. Get the height of the image

size_t CGImageGetHeight(CGImageRef cg_nullable image)

Similar to the width method, this returns the image's pixel height; usage and output are as follows:

    size_t imgHeight = CGImageGetHeight(imgRef);
    printf("Image pixel height: %zu", imgHeight);
    Output: Image pixel height: 748

3. Get the number of bits occupied by each color channel of the image (bpc)

size_t CGImageGetBitsPerComponent(CGImageRef cg_nullable image)

Examples of usage and output are as follows:

    size_t bitsPerComponent = CGImageGetBitsPerComponent(imgRef);
    printf("Bits per component: %zu", bitsPerComponent);
    Output: Bits per component: 8

4. Get the number of bits occupied by each pixel

size_t CGImageGetBitsPerPixel(CGImageRef cg_nullable image)

Examples of usage and output are as follows:

    size_t bitsPerPixel = CGImageGetBitsPerPixel(imgRef);
    printf("Bits per pixel: %zu", bitsPerPixel);
    Output: Bits per pixel: 32

In fact, from CGImageGetBitsPerComponent and CGImageGetBitsPerPixel together we can already infer the image format: 32 bits per pixel with 8 bits per channel means four channels, i.e. an RGBA-format image with an alpha channel. This is useful whenever we need to determine whether an image carries an alpha channel.

For this method there is a closer study in the appendix at the end of the article, which is well worth reading. (Important.)
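A small sketch of that channel-count reasoning (subject to the caveat studied in the appendix):

    // Infer the channel count from bpp / bpc (assumes a packed pixel format).
    size_t bpc = CGImageGetBitsPerComponent(imgRef);
    size_t bpp = CGImageGetBitsPerPixel(imgRef);
    printf("channel count: %zu\n", bpp / bpc); // 32 / 8 = 4, an RGBA-style layout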

5. Get the number of bytes occupied by each row of pixels

size_t CGImageGetBytesPerRow(CGImageRef cg_nullable image)

Examples of usage and output are as follows:

    size_t bytesPerRow = CGImageGetBytesPerRow(imgRef);
    printf("Bytes per row: %zu", bytesPerRow);
    Output: Bytes per row: 4944

The calculation: 1236 pixels per row × 32 bits per pixel ÷ 8 bits per byte (1 byte = 8 bits) = 4944 bytes per row.
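One caveat: in general a bitmap may pad each row for alignment, so bytes-per-row is at least, not necessarily exactly, width × bpp ÷ 8. A defensive check might look like this:

    // Rows can be padded for alignment, so compare with >= rather than ==.
    size_t minBytesPerRow = CGImageGetWidth(imgRef) * CGImageGetBitsPerPixel(imgRef) / 8;
    if (CGImageGetBytesPerRow(imgRef) < minBytesPerRow) {
        printf("unexpected bytes-per-row\n");
    }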

6. Get the color space of the image

CGColorSpaceRef __nullable CGImageGetColorSpace(CGImageRef cg_nullable image)

Examples of usage and output are as follows:

    CGColorSpaceRef colorSpaceRef = CGImageGetColorSpace(imgRef);
    NSLog(@"Color space: %@", colorSpaceRef);

For an 8-bit three-channel image without an alpha channel (for which CGImageGetBitsPerPixel should in principle return 24; the appendix revisits this), the output looks like this:

    Output: Color space: <CGColorSpace 0x60000151fea0> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; sRGB IEC61966-2.1)

This indicates an ICC-based RGB color space.
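If only the color model is needed, rather than the full description string, it can be queried directly; a small sketch:

    CGColorSpaceRef cs = CGImageGetColorSpace(imgRef);
    if (CGColorSpaceGetModel(cs) == kCGColorSpaceModelRGB) {
        printf("RGB color model\n");
    }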

7. Get the alpha information of the image

CGImageAlphaInfo CGImageGetAlphaInfo(CGImageRef cg_nullable image)

Examples of usage and output are as follows:

    CGImageAlphaInfo imageAlphaInfo = CGImageGetAlphaInfo(imgRef);
    NSLog(@"Alpha info: %u", imageAlphaInfo);
    Output: Alpha info: 5

Check this value against the CGImageAlphaInfo enumeration:

typedef CF_ENUM(uint32_t, CGImageAlphaInfo) {
    kCGImageAlphaNone,               /* For example, RGB. */
    kCGImageAlphaPremultipliedLast,  /* For example, premultiplied RGBA */
    kCGImageAlphaPremultipliedFirst, /* For example, premultiplied ARGB */
    kCGImageAlphaLast,               /* For example, non-premultiplied RGBA */
    kCGImageAlphaFirst,              /* For example, non-premultiplied ARGB */
    kCGImageAlphaNoneSkipLast,       /* For example, RBGX. */
    kCGImageAlphaNoneSkipFirst,      /* For example, XRGB. */
    kCGImageAlphaOnly                /* No color data, alpha data only */
};

So the image's alpha info is kCGImageAlphaNoneSkipLast: each pixel reserves space for an alpha component at the end, but the value stored there is ignored and does not participate in rendering the image.

CGImageAlphaInfo tells you:
(1) whether the bitmap contains an alpha channel;
(2) where the alpha bits sit in the pixel data, at the start or the end of each pixel of the bitmap;
(3) whether the alpha value is premultiplied, i.e. whether each color component has already been multiplied by the alpha value used for the final displayed value.
Alpha blending is achieved by combining the color components of the source image with those of the destination image using a linear interpolation formula.

The enumeration values are described as follows:
- kCGImageAlphaFirst: the alpha component is stored in the most significant bits of each pixel, e.g. non-premultiplied ARGB.
- kCGImageAlphaLast: the alpha component is stored in the least significant bits of each pixel, e.g. non-premultiplied RGBA.
- kCGImageAlphaNone: there is no alpha channel.
- kCGImageAlphaNoneSkipFirst: there is no alpha channel. If the pixel size is larger than the space required by the color components, the most significant bits are ignored.
- kCGImageAlphaNoneSkipLast: there is no alpha channel. If the pixel size is larger than the space required by the color components, the least significant bits are ignored.
- kCGImageAlphaOnly: there is no color data, only an alpha channel.
- kCGImageAlphaPremultipliedFirst: the alpha component is stored in the most significant bits of each pixel, and the color components have already been multiplied by the alpha value, e.g. premultiplied ARGB.
- kCGImageAlphaPremultipliedLast: the alpha component is stored in the least significant bits of each pixel, and the color components have already been multiplied by the alpha value, e.g. premultiplied RGBA.
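Putting this table to use, a small helper can answer whether an image actually carries alpha data (a sketch; the three None variants are the no-alpha cases):

    static BOOL ImageHasAlpha(CGImageRef image) {
        CGImageAlphaInfo info = CGImageGetAlphaInfo(image);
        return !(info == kCGImageAlphaNone ||
                 info == kCGImageAlphaNoneSkipLast ||
                 info == kCGImageAlphaNoneSkipFirst);
    }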

8. Get the pixel byte order

CGImageByteOrderInfo CGImageGetByteOrderInfo(CGImageRef cg_nullable image)

Examples of usage and output are as follows:

    CGImageByteOrderInfo imageByteOrderInfo = CGImageGetByteOrderInfo(imgRef);
    NSLog(@"Byte order info: %u", imageByteOrderInfo);
    Output: Byte order info: 0

Check this value against the CGImageByteOrderInfo enumeration:

typedef CF_ENUM(uint32_t, CGImageByteOrderInfo) {
    kCGImageByteOrderMask     = 0x7000,
    kCGImageByteOrderDefault  = (0 << 12),
    kCGImageByteOrder16Little = (1 << 12),
    kCGImageByteOrder32Little = (2 << 12),
    kCGImageByteOrder16Big    = (3 << 12),
    kCGImageByteOrder32Big    = (4 << 12)
} CG_AVAILABLE_STARTING(10.0, 2.0);

So the image uses the default byte order, kCGImageByteOrderDefault.

CGImageByteOrderInfo defines the order in which the bytes of each pixel are read, which mainly divides into big-endian and little-endian modes.
Big-endian mode: the most significant byte is stored at the lowest memory address, and the least significant byte at the highest.
Little-endian mode: the most significant byte is stored at the highest memory address, and the least significant byte at the lowest.
When an image declares a byte order, you must read the pixel bytes in that order; reading them in the wrong order corrupts the colors and the displayed image.
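A quick way to observe endianness on the CPU itself, in plain C:

    uint32_t value = 0x11223344;
    uint8_t *bytes = (uint8_t *)&value;
    // On a little-endian CPU (all iOS devices) bytes[0] == 0x44;
    // on a big-endian CPU it would be 0x11.
    printf("first byte: 0x%02X\n", bytes[0]);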

The commonly used enumeration values are:
- kCGImageByteOrder16Little: 16-bit little-endian reads
- kCGImageByteOrder16Big: 16-bit big-endian reads
- kCGImageByteOrder32Little: 32-bit little-endian reads
- kCGImageByteOrder32Big: 32-bit big-endian reads

Note that on iOS, the Quartz 2D engine only supports three of these reading modes: kCGImageByteOrderDefault, kCGImageByteOrder16Little, and kCGImageByteOrder16Big.

9. Get the CGBitmapInfo of the image

CGBitmapInfo CGImageGetBitmapInfo(CGImageRef cg_nullable image)

Examples of usage and output are as follows:

    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imgRef);
    NSLog(@"bitmapInfo: %u", bitmapInfo);
    Output: bitmapInfo: 5

The enumeration values are defined as follows:

typedef CF_OPTIONS(uint32_t, CGBitmapInfo) {
    kCGBitmapAlphaInfoMask = 0x1F,

    kCGBitmapFloatInfoMask = 0xF00,
    kCGBitmapFloatComponents = (1 << 8),

    kCGBitmapByteOrderMask     = kCGImageByteOrderMask,
    kCGBitmapByteOrderDefault  = kCGImageByteOrderDefault,
    kCGBitmapByteOrder16Little = kCGImageByteOrder16Little,
    kCGBitmapByteOrder32Little = kCGImageByteOrder32Little,
    kCGBitmapByteOrder16Big    = kCGImageByteOrder16Big,
    kCGBitmapByteOrder32Big    = kCGImageByteOrder32Big
} CG_AVAILABLE_STARTING(10.0, 2.0);

Note that CGBitmapInfo is meant to be combined with the appropriate constants using the | operator, as the official documentation explains. So when we build a CGBitmapInfo parameter value, we usually combine a CGImageAlphaInfo value with a CGImageByteOrderInfo value, describing both whether the image contains an alpha channel and the byte order used to read it. For example:

    CGBitmapInfo alphaInfo = kCGImageAlphaLast | kCGImageByteOrder16Little; // alpha at the end, read 16-bit little-endian


So when CGImageGetBitmapInfo returns 5 for our image, compare it with the information printed at the beginning of the article: kCGImageAlphaNoneSkipLast (5) | kCGImageByteOrderDefault (0) = 5, which is exactly the value we obtained.
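The same masks can be used in reverse, to decompose a CGBitmapInfo value back into its parts; a small sketch:

    CGBitmapInfo info = CGImageGetBitmapInfo(imgRef);
    CGImageAlphaInfo alpha = (CGImageAlphaInfo)(info & kCGBitmapAlphaInfoMask);         // 5 & 0x1F = 5
    CGImageByteOrderInfo order = (CGImageByteOrderInfo)(info & kCGBitmapByteOrderMask); // 0, the default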

The remaining methods are used less frequently; refer to the API documentation.

Appendix: a deeper look at CGImageGetBitsPerPixel

In actual project development, the bits-per-pixel value returned by CGImageGetBitsPerPixel is not always accurate.
Five different images demonstrate the problem:

Image 1 is an 8bpc 24bpp PNG, i.e. 8 bits per channel, no alpha channel, a 24-bit bitmap; CGImageGetBitsPerPixel returns 32, which is wrong.

Image 2 is an 8bpc 24bpp JPEG, i.e. 8 bits per channel, no alpha channel, a 24-bit bitmap; CGImageGetBitsPerPixel returns 32, which is wrong.

Image 3 is an 8bpc 24bpp TIFF, i.e. 8 bits per channel, no alpha channel, a 24-bit bitmap; CGImageGetBitsPerPixel returns 24, which is correct.

Image 4 is a 16bpc 48bpp PNG, i.e. 16 bits per channel, no alpha channel, a 48-bit bitmap; CGImageGetBitsPerPixel returns 48, which is correct.

Image 5 is a 16bpc 48bpp TIFF, i.e. 16 bits per channel, no alpha channel, a 48-bit bitmap; CGImageGetBitsPerPixel returns 48, which is correct.

The five images are named by the rule img + 3C (three channels) + format (Png/Jpg/Tiff) + channel bit depth; for example, img3CPng8Bit. A consistent naming rule makes the demonstration code easier to read. The loading code and its output are as follows:

    NSString *path8BitPng = [[NSBundle mainBundle] pathForResource:@"img3CPng8Bit" ofType:@"png"];
    NSData *bundleImgData8BitPng = [NSData dataWithContentsOfFile:path8BitPng];
    UIImage *bundleImage8BitPng = [UIImage imageWithData:bundleImgData8BitPng];
    CGImageRef imgRef8BitPng = [bundleImage8BitPng CGImage];
    size_t bitsPerComponent8BitPng = CGImageGetBitsPerComponent(imgRef8BitPng);
    printf("img8Bit3CPng bits per component: %zu \n", bitsPerComponent8BitPng);
    size_t bitsPerPixel8BitPng = CGImageGetBitsPerPixel(imgRef8BitPng);
    printf("img8Bit3CPng bits per pixel: %zu \n\n", bitsPerPixel8BitPng);
    
    NSString *path8BitJpg = [[NSBundle mainBundle] pathForResource:@"img3Cjpg8Bit" ofType:@"jpg"];
    NSData *bundleImgData8BitJpg = [NSData dataWithContentsOfFile:path8BitJpg];
    UIImage *bundleImage8BitJpg = [UIImage imageWithData:bundleImgData8BitJpg];
    CGImageRef imgRef8BitJpg = [bundleImage8BitJpg CGImage];
    size_t bitsPerComponent8BitJpg = CGImageGetBitsPerComponent(imgRef8BitJpg);
    printf("img8Bit3CJpg bits per component: %zu \n", bitsPerComponent8BitJpg);
    size_t bitsPerPixe8BitJpg = CGImageGetBitsPerPixel(imgRef8BitJpg);
    printf("img8Bit3CJpg bits per pixel: %zu \n\n", bitsPerPixe8BitJpg);
    
    NSString *path8Bit3CTiff = [[NSBundle mainBundle] pathForResource:@"img3CTiff8Bit" ofType:@"tif"];
    NSData *bundleImgData8Bit3CTiff = [NSData dataWithContentsOfFile:path8Bit3CTiff];
    UIImage *bundleImage8Bit3CTiff = [UIImage imageWithData:bundleImgData8Bit3CTiff];
    CGImageRef imgRef8Bit3CTiff = [bundleImage8Bit3CTiff CGImage];
    size_t bitsPerComponent8Bit3CTiff = CGImageGetBitsPerComponent(imgRef8Bit3CTiff);
    printf("img8Bit3CTiff bits per component: %zu \n", bitsPerComponent8Bit3CTiff);
    size_t bitsPerPixel8Bit3CTiff = CGImageGetBitsPerPixel(imgRef8Bit3CTiff);
    printf("img8Bit3CTiff bits per pixel: %zu \n\n", bitsPerPixel8Bit3CTiff);
    
    NSString *path16Bit3CPng = [[NSBundle mainBundle] pathForResource:@"img3CPng16Bit" ofType:@"png"];
    NSData *bundleImgData16Bit3CPng = [NSData dataWithContentsOfFile:path16Bit3CPng];
    UIImage *bundleImage16Bit3CPng = [UIImage imageWithData:bundleImgData16Bit3CPng];
    CGImageRef imgRef16Bit3CPng = [bundleImage16Bit3CPng CGImage];
    size_t bitsPerComponent16Bit3CPng = CGImageGetBitsPerComponent(imgRef16Bit3CPng);
    printf("img16Bit3CPng bits per component: %zu \n", bitsPerComponent16Bit3CPng);
    size_t bitsPerPixe16Bit3CPng = CGImageGetBitsPerPixel(imgRef16Bit3CPng);
    printf("img16Bit3CPng bits per pixel: %zu \n\n", bitsPerPixe16Bit3CPng);
    
    NSString *path16Bit3CTiff = [[NSBundle mainBundle] pathForResource:@"img3CTiff16Bit" ofType:@"tif"];
    NSData *bundleImgData16Bit3CTiff = [NSData dataWithContentsOfFile:path16Bit3CTiff];
    UIImage *bundleImage16Bit3CTiff = [UIImage imageWithData:bundleImgData16Bit3CTiff];
    CGImageRef imgRef16Bit3CTiff = [bundleImage16Bit3CTiff CGImage];
    size_t bitsPerComponent16Bit3CTiff = CGImageGetBitsPerComponent(imgRef16Bit3CTiff);
    printf("img16Bit3CTiff bits per component: %zu \n", bitsPerComponent16Bit3CTiff);
    size_t bitsPerPixel16Bit3CTiff = CGImageGetBitsPerPixel(imgRef16Bit3CTiff);
    printf("img16Bit3CTiff bits per pixel: %zu \n\n", bitsPerPixel16Bit3CTiff);
Output:
img8Bit3CPng bits per component: 8 
img8Bit3CPng bits per pixel: 32 

img8Bit3CJpg bits per component: 8 
img8Bit3CJpg bits per pixel: 32 

img8Bit3CTiff bits per component: 8 
img8Bit3CTiff bits per pixel: 24 

img16Bit3CPng bits per component: 16 
img16Bit3CPng bits per pixel: 48 

img16Bit3CTiff bits per component: 16 
img16Bit3CTiff bits per pixel: 48 

You can see that for the 8-bit 3-channel PNG and JPEG images, CGImageGetBitsPerPixel returns 32 rather than 24, while the 8-bit 3-channel TIFF, the 16-bit 3-channel PNG, and the 16-bit 3-channel TIFF images all produce the correct output.

A plausible guess is that when Apple decodes the most commonly used 8bpc PNG and JPEG images, if the image itself has no alpha channel, an opaque alpha channel (value 255) is added by default while the image data is read from disk. The following demonstrates this conjecture:

Integrate the OpenCV framework, use it to create a matrix 4000 pixels wide and 3000 pixels high, convert the matrix into an 8bpc 3-channel pure red image, and then read that image's information. The code and output are as follows:

    cv::Mat originImgMat1 = Mat(3000, 4000, CV_8UC3); // 3000 rows x 4000 columns
    for (int row = 0; row < 3000; row++) {
        for (int col = 0; col < 4000; col++) {
            originImgMat1.at<Vec3b>(row, col)[0] = 255; // treated as the R channel below
            originImgMat1.at<Vec3b>(row, col)[1] = 0;
            originImgMat1.at<Vec3b>(row, col)[2] = 0;
        }
    }
    
    UIImage *image = [CVTools2 UIImageFromCVMat:originImgMat1];
    CGImageRef imageRef = [image CGImage];
    size_t imgbitsPerComponent = CGImageGetBitsPerComponent(imageRef);
    printf("imgbitsPerComponent bits per component: %zu \n", imgbitsPerComponent);
    size_t imgBitsPerPixe = CGImageGetBitsPerPixel(imageRef);
    printf("imgBitsPerPixe bits per pixel: %zu \n\n", imgBitsPerPixe);

UIImageFromCVMat method:

+(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    // Wrap the matrix data
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
    // Choose the color space based on the matrix's element size
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    // Create the data provider
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    
    // Bits per pixel
    size_t bitsPerPixel = cvMat.elemSize()*8;
    // Number of channels
    size_t channels = cvMat.channels();
    // Bit depth per channel
    size_t bitsPerComponent = bitsPerPixel/channels;
    
    // Build the bitmap info from the channel bit depth and channel count
    CGBitmapInfo bitmapInfo;
    if (bitsPerComponent == 8) {
        if (channels == 3) {
            bitmapInfo = kCGImageAlphaNone | kCGImageByteOrderDefault;
        } else if (channels == 4) {
            bitmapInfo = kCGImageAlphaPremultipliedLast | kCGImageByteOrderDefault;
        } else {
            printf("Unsupported image format");
            abort();
        }
    } else if (bitsPerComponent == 16) {
        if (channels == 3) {
            bitmapInfo = kCGImageAlphaNone | kCGImageByteOrder16Little;
        } else if (channels == 4) {
            bitmapInfo = kCGImageAlphaPremultipliedLast | kCGImageByteOrder16Little;
        } else {
            printf("Unsupported image format");
            abort();
        }
    } else {
        printf("Unsupported image format");
        abort();
    }
    
    // Create the CGImageRef from the matrix and the information above
    CGImageRef imageRef = CGImageCreate(cvMat.cols,            // width in pixels
                                        cvMat.rows,            // height in pixels
                                        bitsPerComponent,      // bit depth per channel
                                        8 * cvMat.elemSize(),  // bits per pixel
                                        cvMat.step[0],         // bytes per row
                                        colorSpace,            // color space
                                        bitmapInfo,            // alpha layout and byte order
                                        provider,              // data source
                                        NULL,                  // decode array, usually NULL
                                        true,                  // should interpolate
                                        kCGRenderingIntentDefault  // default rendering intent
                                        );
    // Convert the CGImage into a UIImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    // Release the imageRef
    CGImageRelease(imageRef);
    // Release the provider
    CGDataProviderRelease(provider);
    // Release the color space
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}

Output:
imgbitsPerComponent bits per component: 8 
imgBitsPerPixe bits per pixel: 24 

You can see that the output is correct.

Now write this image to the sandbox as a PNG file, add it back into the project, and read it again. The code and output are as follows:

    NSString *path = [[NSBundle mainBundle] pathForResource:@"red" ofType:@"png"];
    NSData *bundleImgData = [NSData dataWithContentsOfFile:path];
    UIImage *bundleImage = [UIImage imageWithData:bundleImgData];
    CGImageRef imgRef = [bundleImage CGImage];
    size_t bitsPerComponent = CGImageGetBitsPerComponent(imgRef);
    printf("bitsPerComponent bits per component: %zu \n", bitsPerComponent);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(imgRef);
    printf("bitsPerPixel bits per pixel: %zu \n\n", bitsPerPixel);
Output:
bitsPerComponent bits per component: 8 
bitsPerPixel bits per pixel: 32 

It can be seen that once the image has been written out as a PNG and read back from disk, an alpha channel has been added. This supports the earlier conjecture: when Apple decodes the most commonly used 8bpc PNG and JPEG images, if the image itself has no alpha channel, an opaque alpha channel (value 255) is added by default while the image data is read from disk.
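As a side note, if what is needed is the bit depth of the file itself rather than of the decoded bitmap, ImageIO can read it from the file's metadata, unaffected by decode-time padding. A sketch, reusing the path variable from the example above:

    #import <ImageIO/ImageIO.h>
    
    NSData *fileData = [NSData dataWithContentsOfFile:path];
    CGImageSourceRef src = CGImageSourceCreateWithData((__bridge CFDataRef)fileData, NULL);
    NSDictionary *props = CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(src, 0, NULL));
    NSLog(@"depth per component: %@, has alpha: %@",
          props[(__bridge NSString *)kCGImagePropertyDepth],
          props[(__bridge NSString *)kCGImagePropertyHasAlpha]);
    CFRelease(src);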

If you have other ideas, please leave a comment so we can discuss and learn together. Future posts will cover using Quartz 2D to implement image-cropping requirements.
Source code download

Origin: blog.csdn.net/mumubumaopao/article/details/130708889