1. CPU usage
As the central processing unit of the phone, the CPU is arguably its most critical component. Every application depends on it for scheduling and execution, and its resources are limited, so when an app is poorly designed and keeps the CPU running at a high load, symptoms such as app freezes, a hot phone, and excessive power consumption appear and seriously hurt the user experience.
Monitoring our app's CPU usage therefore becomes especially important. So how do we obtain the CPU usage?
We all know that a running app corresponds to a Mach task, and that a task may have multiple threads executing at the same time. Each thread is the basic unit of CPU usage, so we can calculate the app's CPU usage by obtaining the current CPU usage of all threads in its Mach task.
This is how a Mach task is described in "OS X and iOS Kernel Programming":
A task is a container object through which virtual memory space and other resources, including devices and other handles, are managed. Strictly speaking, a Mach task is not what other operating systems call a process, because Mach, as a microkernel operating system, does not provide the logic of a "process"; it provides only the most basic implementation. In the BSD model, however, the two concepts have a simple 1:1 mapping: each BSD process (that is, each OS X process) is associated with a Mach task object underneath.
Conceptual diagram of the process subsystem composition in Mac OS X
iOS is based on Apple Darwin, whose core is the kernel, XNU, plus the runtime. XNU, the Darwin kernel, takes its name from the abbreviation of "X is Not Unix"; it is a hybrid kernel composed of the Mach microkernel and BSD. The Mach layer is a lightweight platform that performs only the most fundamental duties of an operating system, such as processes and threads, virtual memory management, task scheduling, inter-process communication, and message passing. Other duties, such as file operations and device access, are implemented by the BSD layer.
The threading technology of iOS is similar to that of Mac OS X: both are built on Mach threads. At the Mach layer, the thread_basic_info structure encapsulates the basic information of a single thread:
struct thread_basic_info {
    time_value_t    user_time;      /* user run time */
    time_value_t    system_time;    /* system run time */
    integer_t       cpu_usage;      /* scaled cpu usage percentage */
    policy_t        policy;         /* scheduling policy in effect */
    integer_t       run_state;      /* run state (see below) */
    integer_t       flags;          /* various flags (see below) */
    integer_t       suspend_count;  /* suspend count for thread */
    integer_t       sleep_time;     /* number of seconds that thread has been sleeping */
};
A Mach task contains the list of its threads. The kernel provides the task_threads API to obtain the thread list of a specified task, and the thread_info API (defined in thread_act.h) to then query information about a specified thread.

task_threads stores all threads of target_task in the act_list array; act_listCnt indicates the number of threads:
kern_return_t task_threads
(
    task_t target_task,
    thread_act_array_t *act_list,
    mach_msg_type_number_t *act_listCnt
);
The signature of thread_info is as follows:
kern_return_t thread_info
(
    thread_act_t target_act,
    thread_flavor_t flavor,                     // pass different flavor macros to get different thread information
    thread_info_t thread_info_out,              // the queried thread information
    mach_msg_type_number_t *thread_info_outCnt  // size of the information
);
So we can obtain the CPU usage as follows:
#import "LSLCpuUsage.h"
#import <mach/task.h>
#import <mach/vm_map.h>
#import <mach/mach_init.h>
#import <mach/thread_act.h>
#import <mach/thread_info.h>
@implementation LSLCpuUsage
+ (double)getCpuUsage {
    kern_return_t kr;
    thread_array_t threadList;              // thread list of the current Mach task
    mach_msg_type_number_t threadCount;     // number of threads in the current Mach task
    thread_info_data_t threadInfo;          // info buffer for a single thread
    mach_msg_type_number_t threadInfoCount; // size of that buffer
    thread_basic_info_t threadBasicInfo;    // basic info of a thread
    // Obtain the thread list of the given task via the task_threads API;
    // mach_task_self() means the current Mach task.
    kr = task_threads(mach_task_self(), &threadList, &threadCount);
    if (kr != KERN_SUCCESS) {
        return -1;
    }
    double cpuUsage = 0;
    for (int i = 0; i < threadCount; i++) {
        threadInfoCount = THREAD_INFO_MAX;
        // Query the given thread via the thread_info API.
        // Passing THREAD_BASIC_INFO as the flavor returns the thread's basic
        // information in a thread_basic_info struct: user and system run time,
        // run state, scheduling priority, and so on.
        kr = thread_info(threadList[i], THREAD_BASIC_INFO, (thread_info_t)threadInfo, &threadInfoCount);
        if (kr != KERN_SUCCESS) {
            // Deallocate the thread list before returning, to avoid leaking it.
            vm_deallocate(mach_task_self(), (vm_offset_t)threadList, threadCount * sizeof(thread_t));
            return -1;
        }
        threadBasicInfo = (thread_basic_info_t)threadInfo;
        if (!(threadBasicInfo->flags & TH_FLAGS_IDLE)) {
            cpuUsage += threadBasicInfo->cpu_usage;
        }
    }
    // Release the thread list to avoid a memory leak.
    vm_deallocate(mach_task_self(), (vm_offset_t)threadList, threadCount * sizeof(thread_t));
    return cpuUsage / (double)TH_USAGE_SCALE * 100.0;
}
@end
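A note on the final line: thread_basic_info.cpu_usage is a fixed-point value scaled by TH_USAGE_SCALE (1000 in the Mach headers), so dividing the summed value by TH_USAGE_SCALE and multiplying by 100 yields a percentage. A minimal, language-agnostic sketch of that aggregation, with invented per-thread values:

```python
# Sketch of the aggregation performed by getCpuUsage above.
# Assumption: TH_USAGE_SCALE == 1000, i.e. a thread reporting
# cpu_usage == 1000 is using 100% of one core.
TH_USAGE_SCALE = 1000

def total_cpu_percent(thread_usages):
    """Sum scaled per-thread cpu_usage values into a percentage."""
    return sum(thread_usages) / TH_USAGE_SCALE * 100.0

# Hypothetical snapshot: three threads at 25%, 10%, and 5% of a core.
print(total_cpu_percent([250, 100, 50]))  # -> 40.0
```

Note that on a multi-core device the summed value can exceed 100%, just as in the Objective-C method above.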
2. Memory
Although phone memory keeps growing, it is still finite. If improper application design drives memory usage too high, the app risks being "killed" by the system, which is a devastating experience for the user.
The memory usage of a Mach task is stored in the mach_task_basic_info structure, where resident_size is the amount of physical memory used by the application and virtual_size is the virtual memory size; it is defined in task_info.h:
#define MACH_TASK_BASIC_INFO 20 /* always 64-bit basic info */
struct mach_task_basic_info {
    mach_vm_size_t virtual_size;        /* virtual memory size (bytes) */
    mach_vm_size_t resident_size;       /* resident memory size (bytes) */
    mach_vm_size_t resident_size_max;   /* maximum resident memory size (bytes) */
    time_value_t   user_time;           /* total user run time for terminated threads */
    time_value_t   system_time;         /* total system run time for terminated threads */
    policy_t       policy;              /* default policy for new threads */
    integer_t      suspend_count;       /* suspend count for task */
};
It is obtained through the task_info API, which returns information about target_task according to the specified flavor; task_info is declared in task.h:
kern_return_t task_info
(
    task_name_t target_task,
    task_flavor_t flavor,
    task_info_t task_info_out,
    mach_msg_type_number_t *task_info_outCnt
);
The author first tried the following method to obtain memory status. Its values are basically in line with Tencent's GT, but differ considerably from those shown by Xcode and Instruments:
// Memory usage of the current app; differs noticeably from Xcode's figure
+ (double)getResidentMemory {
    struct mach_task_basic_info info;
    mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;
    if (task_info(mach_task_self(), MACH_TASK_BASIC_INFO, (task_info_t)&info, &count) == KERN_SUCCESS) {
        return (double)info.resident_size / (1024 * 1024); // bytes -> MB
    } else {
        return -1.0;
    }
}
Later I read a blogger discussing this issue who argued that using phys_footprint is the correct answer. I tested it myself, and it is indeed basically in line with the value Xcode shows.
// Memory usage of the current app; close to Xcode's figure
+ (double)getMemoryUsage {
task_vm_info_data_t vmInfo;
mach_msg_type_number_t count = TASK_VM_INFO_COUNT;
if(task_info(mach_task_self(), TASK_VM_INFO, (task_info_t) &vmInfo, &count) == KERN_SUCCESS) {
return (double)vmInfo.phys_footprint / (1024 * 1024);
} else {
return -1.0;
}
}
The blogger mentioned that the definition of phys_footprint can be found in the XNU source code, in the comment above phys_footprint in osfmk/kern/task.c. Based on the formula in that comment, the blogger believes it is the physical memory actually used by the application.
/*
* phys_footprint
* Physical footprint: This is the sum of:
* + (internal - alternate_accounting)
* + (internal_compressed - alternate_accounting_compressed)
* + iokit_mapped
* + purgeable_nonvolatile
* + purgeable_nonvolatile_compressed
* + page_table
*
* internal
* The task's anonymous memory, which on iOS is always resident.
*
* internal_compressed
* Amount of this task's internal memory which is held by the compressor.
* Such memory is no longer actually resident for the task [i.e., resident in its pmap],
* and could be either decompressed back into memory, or paged out to storage, depending
* on our implementation.
*
* iokit_mapped
* IOKit mappings: The total size of all IOKit mappings in this task, regardless of
* clean/dirty or internal/external state.
*
* alternate_accounting
* The number of internal dirty pages which are part of IOKit mappings. By definition, these pages
* are counted in both internal *and* iokit_mapped, so we must subtract them from the total to avoid
* double counting.
*/
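The comment amounts to a simple sum. As a sanity check, here is a small sketch with invented byte counts (the parameter names follow the fields named in the comment; the numbers are made up):

```python
def phys_footprint(internal, internal_compressed, iokit_mapped,
                   alternate_accounting, alternate_accounting_compressed,
                   purgeable_nonvolatile, purgeable_nonvolatile_compressed,
                   page_table):
    """Physical footprint per the osfmk/kern/task.c comment above."""
    return ((internal - alternate_accounting)
            + (internal_compressed - alternate_accounting_compressed)
            + iokit_mapped
            + purgeable_nonvolatile
            + purgeable_nonvolatile_compressed
            + page_table)

# Invented example values, in bytes:
print(phys_footprint(internal=100, internal_compressed=20, iokit_mapped=10,
                     alternate_accounting=5, alternate_accounting_compressed=2,
                     purgeable_nonvolatile=3, purgeable_nonvolatile_compressed=1,
                     page_table=4))  # -> 131
```

The subtraction of alternate_accounting terms exists, per the comment, purely to avoid double counting pages that appear in both internal and iokit_mapped.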
Of course, I agree with this conclusion too >.<
3. Start-up time
The startup time of an app directly affects the user's first impression and judgment of it. If startup takes too long, not only does the experience plummet, it may also trigger Apple's watchdog mechanism and kill the app. That is a tragedy: the user feels the app freezes and crashes as soon as it starts, long-presses the icon, and taps delete. (Xcode does not enable the watchdog in debug mode, so we must test our app on a real device.)
Before measuring the startup time of the APP, let's first understand the startup process of the APP:
APP startup process
App startup can be divided into two stages: before main() executes and after. In summary:

t (total app startup time) = t1 (time before main()) + t2 (time after main()).

- t1 = the time to load the system dylibs (dynamic libraries) and the app's executable;
- t2 = the time from the start of the main() function to the end of the application:didFinishLaunchingWithOptions: method of the AppDelegate class.
So we work on these two stages to measure and optimize app startup time. First, let's look at how to obtain the time spent before main() executes.
Measure the time before the main() function is executed
To measure the pre-main time, t1, Apple provides an official method: when debugging on a real device, check the DYLD_PRINT_STATISTICS option (use DYLD_PRINT_STATISTICS_DETAILS if you want more detailed information), as shown below:
before main() function
The output is as follows:
Total pre-main time: 34.22 milliseconds (100.0%)
dylib loading time: 14.43 milliseconds (42.1%)
rebase/binding time: 1.82 milliseconds (5.3%)
ObjC setup time: 3.89 milliseconds (11.3%)
initializer time: 13.99 milliseconds (40.9%)
slowest intializers :
libSystem.B.dylib : 2.20 milliseconds (6.4%)
libBacktraceRecording.dylib : 2.90 milliseconds (8.4%)
libMainThreadChecker.dylib : 6.55 milliseconds (19.1%)
libswiftCoreImage.dylib : 0.71 milliseconds (2.0%)
System-level dylibs take little time because Apple has optimized them; most of t1 is usually spent on our own code and on linking third-party libraries.
So how can we reduce the time spent before main() is called? The points we can optimize are:
- Reduce unnecessary frameworks, especially third-party ones, because dynamic linking is time-consuming;
- Check whether each framework should be marked Optional or Required: if the framework exists in every iOS version the app supports, mark it Required, otherwise Optional, since Optional involves extra checks;
- Merge or delete some Objective-C classes; to clean up unused classes in the project, you can use the AppCode code-inspection tool;
- Delete useless static variables;
- Delete methods that are never called or are deprecated;
- Defer work that does not have to happen in +load by moving it to +initialize;
- Try not to use C++ virtual functions (creating the virtual function table has overhead).
Measure the time consumed after the main() function is executed
For the second stage, we count the time from the start of main() to the end of application:didFinishLaunchingWithOptions:, which we can measure by instrumenting the code.

An Objective-C project has a main.m file, so we can add the code there directly:
// 1. Add the following to main.m:
CFAbsoluteTime AppStartLaunchTime;
int main(int argc, char * argv[]) {
    AppStartLaunchTime = CFAbsoluteTimeGetCurrent();
    .....
}
// 2. Declare at the top of AppDelegate.m:
extern CFAbsoluteTime AppStartLaunchTime;
// 3. Finally, add in didFinishLaunchingWithOptions of AppDelegate.m:
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"App startup time -- %f", (CFAbsoluteTimeGetCurrent() - AppStartLaunchTime));
});
As everyone knows, a Swift project has no main file. The official explanation is as follows:
In Xcode, Mac templates default to including a "main.swift" file, but for iOS apps the default for new iOS project templates is to add @UIApplicationMain to a regular Swift file. This causes the compiler to synthesize a main entry point for your iOS app, and eliminates the need for a "main.swift" file.
In other words, the @UIApplicationMain attribute adds the main function for us. So if we need to do extra work in main, we have to create the main.swift file ourselves, which Apple allows:
- Remove the @UIApplicationMain attribute from the AppDelegate class;
- Create a main.swift file yourself and add the program entry point:
import UIKit
var appStartLaunchTime: CFAbsoluteTime = CFAbsoluteTimeGetCurrent()
UIApplicationMain(
CommandLine.argc,
UnsafeMutableRawPointer(CommandLine.unsafeArgv)
.bindMemory(
to: UnsafeMutablePointer<Int8>.self,
capacity: Int(CommandLine.argc)),
nil,
NSStringFromClass(AppDelegate.self)
)
Then add at the end of AppDelegate's didFinishLaunchingWithOptions method:
// App startup time: from the start of main to the end of didFinishLaunchingWithOptions
DispatchQueue.main.async {
    print("App startup time, from main to didFinishLaunchingWithOptions: \(CFAbsoluteTimeGetCurrent() - appStartLaunchTime)")
}
Optimization after the main function:
- Prefer pure code and reduce the use of xibs;
- Make sure all network requests in the startup phase are asynchronous;
- Defer time-consuming operations or run them asynchronously where possible.
4. FPS
From Wikipedia we know that FPS is the abbreviation of Frames Per Second: the number of frames displayed per second, commonly called the "refresh rate" (unit: Hz).

FPS measures the amount of information used to store and display motion video. The more frames per second, the smoother the displayed motion; the lower the FPS, the choppier it looks, so to some extent this value measures the performance of the rendering pipeline. Generally, as long as our app keeps its FPS between 50 and 60, the user experience feels smooth.

The iPhone screen normally refreshes 60 times per second, which can be understood as an FPS of 60. We all know that CADisplayLink fires in step with the screen refresh, so can we monitor our FPS with it?
First, what is CADisplayLink?

CADisplayLink is a class provided by Core Animation that is similar to NSTimer; it always fires just before a screen update completes. Its interface is designed to be very similar to NSTimer's, so it effectively serves as a built-in alternative. But instead of a timeInterval in seconds, CADisplayLink has an integer frameInterval property that specifies the number of frames between invocations. The default value is 1, which means it fires on every screen refresh. If your animation code takes longer than one sixtieth of a second to execute, you can set frameInterval to 2, which runs the animation every other frame (30 frames per second).
Using CADisplayLink to monitor the FPS of the interface (cf. YYFPSLabel):
import UIKit
class LSLFPSMonitor: UILabel {
    private var link: CADisplayLink = CADisplayLink.init()
    private var count: NSInteger = 0
    private var lastTime: TimeInterval = 0.0
    private var fpsColor: UIColor = UIColor.green
    public var fps: Double = 0.0

    // MARK: - init
    override init(frame: CGRect) {
        var f = frame
        if f.size == CGSize.zero {
            f.size = CGSize(width: 55.0, height: 22.0)
        }
        super.init(frame: f)
        self.textColor = UIColor.white
        self.textAlignment = .center
        self.font = UIFont.init(name: "Menlo", size: 12.0)
        self.backgroundColor = UIColor.black
        link = CADisplayLink.init(target: LSLWeakProxy(target: self), selector: #selector(tick))
        link.add(to: RunLoop.current, forMode: RunLoopMode.commonModes)
    }

    deinit {
        link.invalidate()
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // MARK: - actions
    @objc func tick(link: CADisplayLink) {
        guard lastTime != 0 else {      // first callback: just record the timestamp
            lastTime = link.timestamp
            return
        }
        count += 1
        let delta = link.timestamp - lastTime
        guard delta >= 1.0 else {       // accumulate callbacks for at least one second
            return
        }
        lastTime = link.timestamp
        fps = Double(count) / delta
        let fpsText = "\(String.init(format: "%.3f", fps)) FPS"
        count = 0
        let attrMStr = NSMutableAttributedString(attributedString: NSAttributedString(string: fpsText))
        if fps > 55.0 {
            fpsColor = UIColor.green
        } else if fps >= 50.0 && fps <= 55.0 {
            fpsColor = UIColor.yellow
        } else {
            fpsColor = UIColor.red
        }
        attrMStr.setAttributes([NSAttributedStringKey.foregroundColor: fpsColor], range: NSMakeRange(0, attrMStr.length - 3))
        attrMStr.setAttributes([NSAttributedStringKey.foregroundColor: UIColor.white], range: NSMakeRange(attrMStr.length - 3, 3))
        DispatchQueue.main.async {
            self.attributedText = attrMStr
        }
    }
}
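The core of tick(link:) is independent of UIKit: count the callbacks, and once at least a second has elapsed, divide the count by the elapsed time. A sketch of that accumulator, driven here by synthetic 60 Hz timestamps instead of a real CADisplayLink:

```python
class FPSCounter:
    """Mirrors the count/lastTime logic of the tick(link:) method above."""
    def __init__(self):
        self.last_time = None   # plays the role of lastTime
        self.count = 0
        self.fps = 0.0

    def tick(self, timestamp):
        if self.last_time is None:      # first callback: only record the time
            self.last_time = timestamp
            return None
        self.count += 1
        delta = timestamp - self.last_time
        if delta < 1.0:                 # accumulate for at least one second
            return None
        self.last_time = timestamp
        self.fps = self.count / delta
        self.count = 0
        return self.fps

# Simulate vsync callbacks arriving at a steady 60 Hz for ~2 seconds.
counter = FPSCounter()
readings = [r for i in range(122) for r in [counter.tick(i / 60.0)] if r is not None]
print([round(r) for r in readings])  # -> [60, 60]
```

If the simulated callbacks were to arrive late (dropped frames), the count over each one-second window would shrink and the reading would fall below 60.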
A CADisplayLink-based implementation, verified on a real device, can largely meet the business need of monitoring FPS and provide a reference for improving the user experience, though its value may differ somewhat from Instruments'. Let's discuss the approach and its possible problems.

- (1) Discrepancies versus the Instruments value, for the following reason: CADisplayLink runs in the run loop it was added to (usually the main thread's), so it can only detect the frame rate of the current run loop. The scheduling of the tasks a run loop manages is affected by the tasks' RunLoopMode and by how busy the CPU is. Therefore, to really pinpoint an exact performance problem, it is best to confirm with Instruments.
- (2) Using CADisplayLink can create retain cycles.

For example:
let link = CADisplayLink.init(target: self, selector: #selector(tick))
let timer = Timer.init(timeInterval: 1.0, target: self, selector: #selector(tick), userInfo: nil, repeats: true)
Reason: both of the above usages strongly reference self, so the timer holds self and self holds the timer. When the page should be dismissed, the retain cycle means neither side can be released. Using weak does not effectively solve it either:
weak var weakSelf = self
let link = CADisplayLink.init(target: weakSelf, selector: #selector(tick))
So how should we solve this? Some will say: call the timer's invalidate method in deinit (or dealloc). But that is ineffective, because the retain cycle already exists, so deinit is never reached.
The solution the author of YYKit provides is to use YYWeakProxy, which inherits not from NSObject but from NSProxy.
NSProxy
An abstract superclass defining an API for objects that act as stand-ins for other objects or for objects that don’t exist yet.
NSProxy is an abstract superclass that defines an API for objects acting as stand-ins for other objects, or for objects that do not yet exist. For details, see Apple's NSProxy documentation.

The modified code is as follows. I verified that the timer is released as expected with LSLWeakProxy; the full implementation has been synced to GitHub.
let link = CADisplayLink.init(target: LSLWeakProxy(target: self), selector: #selector(tick))
5. Lag
Before looking into the causes of lag, let's first look at how the screen displays an image.

How the screen displays an image:
Principles of Screen Drawing
Current mobile devices generally use double buffering plus vertical sync (V-Sync) display technology.
As shown above, displaying content is a cooperation between the CPU, the GPU, and the display. The CPU is responsible for computing the content to display: view creation, layout calculation, image decoding, text drawing, and so on. The CPU then submits the computed content to the GPU, which transforms, composites, and renders it. The GPU renders a frame ahead of time into a buffer for the video controller to read; when the next frame has been rendered, the video controller's pointer switches directly to the second buffer (the double-buffering principle). The GPU waits for the display's VSync (vertical sync) signal before rendering a new frame and updating the buffer. This avoids screen tearing and makes the display smoother, but it consumes more computing resources and introduces some latency.
The cause of lag: dropped frames.

Based on the display principle above, on a mobile device with vertical sync, if the CPU or GPU has not finished submitting its content by the time a VSync signal arrives, that frame is discarded and waits for the next opportunity to be shown, while the previously displayed content remains on screen. Common causes of lag include code on the main thread that blocks responses to tap and scroll events, or that otherwise obstructs the main thread's UI drawing.
Lag monitoring:

Lag monitoring generally has two implementation schemes:

- (1) Main-thread stall monitoring: a child thread monitors the main thread's runLoop and determines whether the time between two run-loop states exceeds a certain threshold.
- (2) FPS monitoring: to keep UI interaction smooth, the app's refresh rate should stay around 60 fps. The principle of FPS monitoring was discussed above, so it is skipped here.
When using FPS to monitor performance in practice, we found that the FPS value jitters considerably, which makes detecting lag harder. To solve this, we instead measure the time of each pass of the main thread's message loop, and record a lag event whenever that time exceeds a prescribed threshold.

This is also the solution adopted by Hertz, Meituan's mobile performance-monitoring system. The WeChat team put forward a similar scheme in the course of their practice as well; see "WeChat Reading iOS Performance Optimization Summary".
Meituan Hertz program flow chart
The proposal is based on the observation that the Sources events (and other interaction events) triggered by scrolling always execute quickly and then enter the kCFRunLoopBeforeWaiting state; if a lag occurs during scrolling, the RunLoop will necessarily remain in either the kCFRunLoopAfterWaiting or the kCFRunLoopBeforeSources state.

So the first scheme for monitoring main-thread lag is: create a child thread, and continuously measure whether the time between the kCFRunLoopBeforeSources and kCFRunLoopAfterWaiting states exceeds a certain threshold, to determine whether the main thread is stalled.

However, because the main thread's RunLoop basically sits in the BeforeWaiting state when idle, this detection method would always conclude that the main thread is stalled, even when no lag has occurred.
To solve this problem, the author of ANREye, a Swift lag-detection library, offers a solution. The general idea of this scheme: create a child thread that performs loop detection, setting a flag to YES on each iteration and then dispatching a task to the main thread that sets the flag to NO. The child thread then sleeps for the timeout threshold; if the flag has not been successfully set to NO by the time it wakes, the main thread is considered stalled.

Combining this scheme, dispatching tasks to the main thread to set the flag handles lag detection in the normal case where the main thread is in the BeforeWaiting state:
#define lsl_SEMAPHORE_SUCCESS 0
static BOOL lsl_is_monitoring = NO;
static dispatch_semaphore_t lsl_semaphore;
static NSTimeInterval lsl_time_out_interval = 0.05;
@implementation LSLAppFluencyMonitor
static inline dispatch_queue_t __lsl_fluecy_monitor_queue() {
    static dispatch_queue_t lsl_fluecy_monitor_queue;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        lsl_fluecy_monitor_queue = dispatch_queue_create("com.dream.lsl_monitor_queue", NULL);
    });
    return lsl_fluecy_monitor_queue;
}

static inline void __lsl_monitor_init() {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        lsl_semaphore = dispatch_semaphore_create(0);
    });
}

#pragma mark - Public
+ (instancetype)monitor {
    return [LSLAppFluencyMonitor new];
}

- (void)startMonitoring {
    if (lsl_is_monitoring) { return; }
    lsl_is_monitoring = YES;
    __lsl_monitor_init();
    dispatch_async(__lsl_fluecy_monitor_queue(), ^{
        while (lsl_is_monitoring) {
            __block BOOL timeOut = YES;
            dispatch_async(dispatch_get_main_queue(), ^{
                timeOut = NO;
                dispatch_semaphore_signal(lsl_semaphore);
            });
            [NSThread sleepForTimeInterval: lsl_time_out_interval];
            if (timeOut) {
                [LSLBacktraceLogger lsl_logMain];          // print the main thread's call stack
                // [LSLBacktraceLogger lsl_logCurrent];    // print the current thread's call stack
                // [LSLBacktraceLogger lsl_logAllThread];  // print all threads' call stacks
            }
            dispatch_wait(lsl_semaphore, DISPATCH_TIME_FOREVER);
        }
    });
}

- (void)stopMonitoring {
    if (!lsl_is_monitoring) { return; }
    lsl_is_monitoring = NO;
}
@end
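The flag-plus-timeout pattern above is platform-agnostic. A minimal sketch with Python threads, where a caller-supplied function stands in for dispatching to the main queue (the function names here are invented for illustration):

```python
import threading
import time

def detect_stall(run_on_main, timeout=0.05):
    """One round of flag-based detection: returns True if the 'main thread'
    failed to reset the flag within `timeout` seconds (i.e. it is stalled)."""
    flag = {"timed_out": True}          # set to YES at the start of each round
    done = threading.Event()            # plays the role of the semaphore

    def reset():                        # the task dispatched to the main queue
        flag["timed_out"] = False
        done.set()

    run_on_main(reset)                  # enqueue the task
    time.sleep(timeout)                 # the child thread sleeps for the threshold
    done.wait(timeout)                  # cf. dispatch_semaphore_wait
    return flag["timed_out"]

# A responsive "main thread" runs the task promptly: no stall detected.
responsive = lambda task: threading.Thread(target=task).start()
print(detect_stall(responsive))  # -> False

# A blocked "main thread" never gets to run the task: stall detected.
blocked = lambda task: None
print(detect_stall(blocked))  # -> True
```

The real monitor loops this round forever on a dedicated queue, capturing a stack trace whenever the round times out, exactly as startMonitoring does above.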
Here LSLBacktraceLogger is the class that captures stack traces; see the code on GitHub for details.

The printed log looks like this:
2018-08-16 12:36:33.910491+0800 AppPerformance[4802:171145] Backtrace of Thread 771:
======================================================================================
libsystem_kernel.dylib 0x10d089bce __semwait_signal + 10
libsystem_c.dylib 0x10ce55d10 usleep + 53
AppPerformance 0x108b8b478 $S14AppPerformance25LSLFPSTableViewControllerC05tableD0_12cellForRowAtSo07UITableD4CellCSo0kD0C_10Foundation9IndexPathVtF + 1144
AppPerformance 0x108b8b60b $S14AppPerformance25LSLFPSTableViewControllerC05tableD0_12cellForRowAtSo07UITableD4CellCSo0kD0C_10Foundation9IndexPathVtFTo + 155
UIKitCore 0x1135b104f -[_UIFilteredDataSource tableView:cellForRowAtIndexPath:] + 95
UIKitCore 0x1131ed34d -[UITableView _createPreparedCellForGlobalRow:withIndexPath:willDisplay:] + 765
UIKitCore 0x1131ed8da -[UITableView _createPreparedCellForGlobalRow:willDisplay:] + 73
UIKitCore 0x1131b4b1e -[UITableView _updateVisibleCellsNow:isRecursive:] + 2863
UIKitCore 0x1131d57eb -[UITableView layoutSubviews] + 165
UIKitCore 0x1133921ee -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 1501
QuartzCore 0x10ab72eb1 -[CALayer layoutSublayers] + 175
QuartzCore 0x10ab77d8b _ZN2CA5Layer16layout_if_neededEPNS_11TransactionE + 395
QuartzCore 0x10aaf3b45 _ZN2CA7Context18commit_transactionEPNS_11TransactionE + 349
QuartzCore 0x10ab285b0 _ZN2CA11Transaction6commitEv + 576
QuartzCore 0x10ab29374 _ZN2CA11Transaction17observer_callbackEP19__CFRunLoopObservermPv + 76
CoreFoundation 0x109dc3757 __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 23
CoreFoundation 0x109dbdbde __CFRunLoopDoObservers + 430
CoreFoundation 0x109dbe271 __CFRunLoopRun + 1537
CoreFoundation 0x109dbd931 CFRunLoopRunSpecific + 625
GraphicsServices 0x10f5981b5 GSEventRunModal + 62
UIKitCore 0x112c812ce UIApplicationMain + 140
AppPerformance 0x108b8c1f0 main + 224
libdyld.dylib 0x10cd4dc9d start + 1
======================================================================================
The second scheme: implement it with CADisplayLink.

We already covered CADisplayLink usage in detail when measuring the FPS value; here you can likewise monitor whether the FPS stays below a certain threshold for a sustained period.
Author: Green apple orchard
Link: https://www.jianshu.com/p/95df83780c8f
Source: Jianshu. Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please credit the source.