The causes of iOS UI jank and how to optimize it

In daily development, most of what we do is UI work: drawing, laying out, and displaying content. Whether an app's UI feels smooth is also the most direct part of the user experience. In this article I analyze why UI jank happens and discuss how to optimize it.

1. The cause of jank

Normal rendering works like this: the CPU computes the frame data, the GPU renders it into a FrameBuffer, and the video controller reads that buffer and shows it on the monitor. For performance, a second FrameBuffer is added on top of this basic design (double buffering): the display refreshes at 60 fps or 120 fps, and the video controller alternates between the two buffers frame by frame.

Cause of jank: if the GPU has not finished producing a frame in time, the video controller, alternating between buffer 1 and buffer 2, keeps showing the old content while it waits for that frame, and the UI appears stuck. If the frame still is not ready by the next refresh, it is skipped and the following frame is displayed instead (this is a dropped frame, 掉帧).

Core questions:

  • We now know what causes jank
  • How do we monitor for it?
  • What techniques can monitor it?

2. Jank detection

What do we use to monitor jank? Here we use the RunLoop. We know the RunLoop is the main run loop: it manages the life cycle of the main thread's tasks. (At 60 FPS, each frame has 1/60 s ≈ 16.67 ms.)

A quick overview of the approach: the main idea is to add an observer task to the RunLoop and, against the cadence of VSync (the vertical synchronization signal), judge whether the UI is stuck.

The approach borrows from YYFPSLabel in YYKit. Create a project, NYMainThreadBlock, to try out the core idea of jank monitoring: register a custom observer on the main RunLoop whose callback signals a semaphore, and have NYBlockMonitor run an infinite loop on a background thread that waits on that semaphore to judge when the custom observer goes quiet. The semaphore wake-ups thus measure the work of the whole RunLoop: UI rendering happens among the RunLoop's system tasks, so when higher-priority tasks eat up the RunLoop's time, our custom observer's callback is delayed, the wait times out, and we can judge that the UI is stuck. (See the figure.)

Core code:

//  NYBlockMonitor.m
//  NYMainThreadBlock
//
//  Created by ning on 2022/7/12.
//

#import "NYBlockMonitor.h"
@interface NYBlockMonitor(){
    CFRunLoopActivity acticity; // latest observed run loop activity
}

@property (nonatomic,strong) dispatch_semaphore_t semaphore;
@property (nonatomic,assign) NSUInteger timeoutCount;

@end

@implementation NYBlockMonitor

+ (instancetype)sharedInstance
{
    static id instance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        instance = [[self alloc] init];
    });
    return instance;
}

- (void)start
{
    [self registerObserver]; // register the runloop observer
    [self startMonitor];     // start the monitoring thread
}

static void CallBack(CFRunLoopObserverRef observer, CFRunLoopActivity activity, void *info)
{
    NYBlockMonitor *monitor = (__bridge NYBlockMonitor *)info;
    monitor->acticity = activity;
    // signal the semaphore (+1)
    dispatch_semaphore_t semaphore = monitor->_semaphore;
    dispatch_semaphore_signal(semaphore);
}

- (void)registerObserver
{
    CFRunLoopObserverContext context = {0, (__bridge void *)self, NULL, NULL};
    // NSIntegerMax: lowest priority, so the observer runs after everything else
    CFRunLoopObserverRef observer = CFRunLoopObserverCreate(kCFAllocatorDefault,
                                                            kCFRunLoopAllActivities,
                                                            YES,
                                                            NSIntegerMax,
                                                            &CallBack,
                                                            &context);
    CFRunLoopAddObserver(CFRunLoopGetMain(), observer, kCFRunLoopCommonModes);
    CFRelease(observer); // the run loop retains the observer
}

- (void)startMonitor{
    // create the semaphore
    _semaphore = dispatch_semaphore_create(0);
    // measure the elapsed time on a background thread
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        while (YES) // monitor in an endless loop
        {
            // Timeout is 1 second: if the semaphore was not signaled in time,
            // st != 0, meaning the RunLoop is still busy with its tasks.
            long st = dispatch_semaphore_wait(self->_semaphore, dispatch_time(DISPATCH_TIME_NOW, 1 * NSEC_PER_SEC));
            if (st != 0) {
                // About to process sources, or just woken from sleep: check here
                if (self->acticity == kCFRunLoopBeforeSources || self->acticity == kCFRunLoopAfterWaiting) {
                    if (++self->_timeoutCount < 2) {
                        NSLog(@"timeoutCount=%lu",(unsigned long)self->_timeoutCount);
                        continue;
                    }
                    // With a 1-second window, hitches tend to arrive in bursts;
                    // requiring two in a row avoids flooding the log.
                    NSLog(@"Detected two or more consecutive hitches - %lu",(unsigned long)self->_timeoutCount);
                }
            }
            self->_timeoutCount = 0;
        }
    });
}
@end

Running result: once the main thread is deliberately blocked, the monitor logs the consecutive timeouts (see figure).

3. Interface optimization

1. Pre-layout

In the conventional MVC pattern, frame calculation and related UI sizing are usually done in the view layer, which costs performance exactly when the UI is being displayed. How do we solve this? Move the view's size and layout into a layout model and compute it on a background thread; this reduces the work the main thread must do while rendering the view.

Finally, a small piece of code:

@implementation NYTimeLineCellLayout

- (instancetype)initWithModel:(LGTimeLineModel *)timeLineModel
{
    if (!timeLineModel) return nil;
    self = [super init];
    if (self) {
        _timeLineModel = timeLineModel;
        [self layout];
    }
    return self;
}
- (void)setTimeLineModel:(LGTimeLineModel *)timeLineModel
{
    _timeLineModel = timeLineModel;
    [self layout];
}
- (void)layout
{
    CGFloat sWidth = [UIScreen mainScreen].bounds.size.width;
    self.iconRect = CGRectMake(10, 10, 45, 45);
    CGFloat nameWidth = [self calcWidthWithTitle:_timeLineModel.name font:titleFont];
    CGFloat nameHeight = [self calcLabelHeight:_timeLineModel.name fontSize:titleFont width:nameWidth];
    self.nameRect = CGRectMake(CGRectGetMaxX(self.iconRect) + nameLeftSpaceToHeadIcon, 17, nameWidth, nameHeight);
    CGFloat msgWidth = sWidth - 10 - 16;
    CGFloat msgHeight = 0;
    // calculate the height of the message text
  //********************** code omitted **********************//
    self.height = CGRectGetMaxY(self.seperatorViewRect);
}

#pragma mark - Calculate Methods
- (CGFloat)calcWidthWithTitle:(NSString *)title font:(CGFloat)font {
    NSStringDrawingOptions options =  NSStringDrawingUsesLineFragmentOrigin | NSStringDrawingUsesFontLeading;
    CGRect rect = [title boundingRectWithSize:CGSizeMake(MAXFLOAT,MAXFLOAT) options:options attributes:@{NSFontAttributeName:[UIFont systemFontOfSize:font]} context:nil];
    CGFloat realWidth = ceilf(rect.size.width);
    return realWidth;
}
- (CGFloat)calcLabelHeight:(NSString *)str fontSize:(CGFloat)fontSize width:(CGFloat)width {
    NSStringDrawingOptions options =  NSStringDrawingUsesLineFragmentOrigin | NSStringDrawingUsesFontLeading;
    CGRect rect = [str boundingRectWithSize:CGSizeMake(width,MAXFLOAT) options:options attributes:@{NSFontAttributeName:[UIFont systemFontOfSize:fontSize]} context:nil];
    CGFloat realHeight = ceilf(rect.size.height);
    return realHeight;
}
- (int)caculateAttributeLabelHeightWithString:(NSAttributedString *)string  width:(int)width {
    int total_height = 0;
    CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)string);    // string is the NSAttributedString whose height we measure
    CGRect drawingRect = CGRectMake(0, 0, width, 100000);  // the height here must be large enough
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddRect(path, NULL, drawingRect);
    CTFrameRef textFrame = CTFramesetterCreateFrame(framesetter,CFRangeMake(0,0), path, NULL);
    CGPathRelease(path);
    CFRelease(framesetter);
    NSArray *linesArray = (NSArray *) CTFrameGetLines(textFrame);
    CGPoint origins[[linesArray count]];
    CTFrameGetLineOrigins(textFrame, CFRangeMake(0, 0), origins);
    int line_y = (int) origins[[linesArray count] -1].y;  // y origin of the last line
    CGFloat ascent;
    CGFloat descent;
    CGFloat leading;
    CTLineRef line = (__bridge CTLineRef) [linesArray objectAtIndex:[linesArray count]-1];
    CTLineGetTypographicBounds(line, &ascent, &descent, &leading);
    total_height = 100000 - line_y + (int) descent +1;    // +1 corrects for truncation when descent is cast to int
    CFRelease(textFrame);
    return total_height;
}
@end

// TableViewCell: configure with NYTimeLineCellLayout
- (void)configureLayout:(NYTimeLineCellLayout *)layout{
//********************** code omitted **********************//
}

In this way we achieve pre-layout (a very simple idea that anyone can adopt).

2. Pre-decoding (image downsampling)

In real projects there is another situation that drains UI rendering performance: image loading. Why do images burden the system, and how can we reduce the cost of loading them? Consider the straightforward way first:

UIImage *image = [UIImage imageWithContentsOfFile:@"/xxxxx.png"];
self.kcImageView.image = image;

Run the project and check the memory usage: the decoded image occupies 31.4 MB in memory, far more than the file on disk. Now change it to the following code (the downsampling method from Apple's official documentation):

// Objective-C: shrink a large image down to its display size
- (UIImage *)downsampleImageAt:(NSURL *)imageURL to:(CGSize)pointSize scale:(CGFloat)scale {
    // Create an image source from the file URL
    NSDictionary *imageSourceOptions = @{(__bridge NSString *)kCGImageSourceShouldCache: @NO // do not decode the original image yet
    };
    CGImageSourceRef imageSource =
    CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, (__bridge CFDictionaryRef)imageSourceOptions);
    if (imageSource == NULL) return nil;
    // Downsample
    CGFloat maxDimensionInPixels = MAX(pointSize.width, pointSize.height) * scale;
    NSDictionary *downsampleOptions =
    @{
      (__bridge NSString *)kCGImageSourceCreateThumbnailFromImageAlways: @YES,
      (__bridge NSString *)kCGImageSourceShouldCacheImmediately: @YES,  // decode while shrinking the image
      (__bridge NSString *)kCGImageSourceCreateThumbnailWithTransform: @YES,
      (__bridge NSString *)kCGImageSourceThumbnailMaxPixelSize: @(maxDimensionInPixels)
       };
    CGImageRef downsampledImage =
    CGImageSourceCreateThumbnailAtIndex(imageSource, 0, (__bridge CFDictionaryRef)downsampleOptions);
    CFRelease(imageSource);
    if (downsampledImage == NULL) return nil;
    UIImage *image = [[UIImage alloc] initWithCGImage:downsampledImage];
    CGImageRelease(downsampledImage);
    return image;
}

Running effect: memory usage drops sharply. Downsampled decoding reduces the system's cost of loading images.
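The reason downsampling helps is that a decoded bitmap costs memory in proportion to its pixel dimensions (typically 4 bytes per pixel for RGBA), not to its compressed file size. A back-of-the-envelope sketch (the example dimensions below are made up, not taken from the project above):

```c
#include <stddef.h>

/* Approximate memory for a decoded RGBA bitmap: 4 bytes per pixel,
 * regardless of how small the compressed file is on disk. */
static size_t decoded_bytes(size_t width_px, size_t height_px) {
    return width_px * height_px * 4;
}
```

For example, a 3000×4000 photo decodes to 48,000,000 bytes (about 45.8 MiB) even if the JPEG on disk is only 2 MB; downsampled to a 300×400 thumbnail it needs just 480,000 bytes, a 100× reduction.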

3. Asynchronous rendering

What is asynchronous rendering, and what does it accomplish? Let's work through a case. Running the demo project, we discover that the whole interface has only one layer. Normally we build an interface out of many controls on a view, and each control has its own backing layer. Yet this case has only one layer; why? Let's unravel the mystery step by step.

Run the project and look at the stack trace: wherever layers are in play we see CA::Transaction::commit(). What does this transaction do?

The frameworks UIKit relies on to display content, and the rendering pipeline from Core Animation to the GPU, look roughly like this:

  • Application: UIKit views and controls are laid out and indirectly associated with Core Animation layers
  • Core Animation: layer data is committed to the iOS Render Server (OpenGL ES & Core Graphics)
  • Render Server: processes the data and communicates with the GPU to hand it over
  • GPU: renders using the device's graphics hardware, and the result goes to the Display

What does Commit Transaction do?

  • Layout: build the view hierarchy and frames; traversal operations [UIView layoutSubviews], [CALayer layoutSublayers]
  • Display: draw the views; drawRect: / displayLayer: (bitmap drawing)
  • Prepare: additional Core Animation work, such as image decoding
  • Commit: package the layers and send them to the Render Server

Code:

@implementation NYView
- (void)drawRect:(CGRect)rect {
    // Drawing code: the drawing goes into a backing store (extra storage) that is later handed to the GPU
}

+ (Class)layerClass{
    return [NYLayer class];
}

- (void)layoutSublayersOfLayer:(CALayer *)layer
{
    [super layoutSublayersOfLayer:layer];
    [self layoutSubviews];
}

- (CGContextRef)createContext
{
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.layer.opaque, self.layer.contentsScale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    return context;
}

- (void)layerWillDraw:(CALayer *)layer{
    // preparation before drawing; do nothing here
}

// the actual drawing operation
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx{
    [super drawLayer:layer inContext:ctx];
    [[UIColor redColor] set];
       //Core Graphics
    UIBezierPath *path = [UIBezierPath bezierPathWithRect:CGRectMake(self.bounds.size.width / 2- 20, self.bounds.size.height / 2- 20, 40, 40)];
    CGContextAddPath(ctx, path.CGPath);
    CGContextFillPath(ctx);
}
// layer.contents = (bitmap)
- (void)displayLayer:(CALayer *)layer{
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    dispatch_async(dispatch_get_main_queue(), ^{
        layer.contents = (__bridge id)(image.CGImage);
    });
}
- (void)closeContext{
    UIGraphicsEndImageContext();
}
@end


@implementation NYLayer
// the method we found in the earlier breakpoint's call stack
- (void)layoutSublayers{
    if (self.delegate && [self.delegate respondsToSelector:@selector(layoutSublayersOfLayer:)]) {
        //UIView
        [self.delegate layoutSublayersOfLayer:self];
    }else{
        [super layoutSublayers];
    }
}

// entry point of the drawing pipeline
- (void)display{
    // the implementation idea borrowed from Graver
    CGContextRef context = (__bridge CGContextRef)([self.delegate performSelector:@selector(createContext)]);
    [self.delegate layerWillDraw:self];
    [self drawInContext:context];
    [self.delegate displayLayer:self];
    [self.delegate performSelector:@selector(closeContext)];
}

@end

Running effect: the drawing order is layoutSublayersOfLayer -> createContext -> layerWillDraw -> drawLayer -> displayLayer -> closeContext. You can also study Meituan's open-source Graver framework: using "carving" to achieve efficient rendering of the UI on iOS.

Origin juejin.im/post/7119858546405195783