UC Browser's Road to Fast Startup: How to Deal with the Recurring Problems of Optimizing a Large-Scale App

Guest introduction: Liu Cheng, UC technical expert. He joined UC as an intern in 2011 and became a full-time employee in 2012. He has participated in the development of multiple Android client products, including UC Browser MTK, UC Browser HD, UC Browser TV, and the Chinese and international versions of UC Browser. He specializes in framework design and app performance optimization, and is currently responsible for performance work on UC Browser such as startup and jank optimization.

Startup matters a great deal for an app. This article first introduces the background of startup and the problems an app typically faces, then presents the optimization plan we adopted, and finally describes the performance monitoring system needed to preserve the gains.

Why does startup matter?

Some people may think startup is unimportant, but its importance can be explained on three levels:

First, startup shapes word of mouth. For users, startup is their first encounter with the app; if the first impression is bad, it can color their perception of the whole product.

Second, many apps today rely on partnerships, and these partnerships usually involve apps invoking one another. If your startup is slow, partners will be reluctant to work with you, because bringing up your page from their interface is slow and breaks their experience loop. In other words, startup speed is the most basic bargaining chip in such cooperation.

Third, some teams have analyzed the relationship between startup speed and daily retention. I don't have specific numbers here, but a mobile analytics company found that an app's daily retention correlates with its startup and jank performance. That alone is reason enough to make startup fast.

Although what I share today is mainly optimization experience on the Android platform, I believe the content can also serve as a reference for other platforms.

Figure 1

To understand startup, we must first understand what the app actually does while starting. Startup is a cooperative process between the system and our application process. As shown in Figure 1, the upper part is work done by system processes; we focus on the work done by the application process below.

Android always walks through these steps when our application starts. The green parts are the hooks the system hands over to us, and the duration we care about runs from the moment the application process starts until startup completes.

So when does startup end? The system does report a Displayed time, but in many cases that does not mean the app has really started: it only means our Activity and its window have been drawn, while the business UI may not be fully visible yet. For that reason, Android 4.4 added an API (Activity.reportFullyDrawn()) that we can call ourselves. At these two points in time the system prints two corresponding logs showing the startup duration. That is the overall startup process.
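As a minimal illustration of bracketing that window, here is a hypothetical helper (not UC's actual code) that records the process start and reports the elapsed time at the moment `Activity.reportFullyDrawn()` would be called:

```java
// Minimal sketch (hypothetical helper, not UC's actual code): record the
// moment our Application code first runs, then report elapsed time when the
// business UI is genuinely visible, i.e. where Activity.reportFullyDrawn()
// would be called on Android 4.4+.
public final class StartupTimer {
    private static volatile long startNanos;

    // Call at the top of Application.attachBaseContext().
    public static void markProcessStart() {
        startNanos = System.nanoTime();
    }

    // Call when the first meaningful frame is shown; returns elapsed ms.
    public static long reportFullyDrawnMillis() {
        return (System.nanoTime() - startNanos) / 1_000_000L;
    }
}
```

The point is simply that "startup time" should be measured to the fully drawn moment, not to the system's Displayed log.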
Figure 2

What we care about now is what we can actually influence: the green parts above. The non-green parts are done for us by the system; we can only insert our own work into the green hooks and cooperate with the system until the interface the user actually sees is up.

Generally speaking, we do the following in these callbacks: in Application.attachBaseContext we handle things like MultiDex, hotfix, or base-library setup; in onCreate we load third-party libraries and other basic libraries. That way, when the business UI launches, it can concentrate on building views and loading data.
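The division of labor above can be sketched as a stage-tagged task list. This is plain Java with stage names of my own choosing, not UC's actual framework:

```java
import java.util.*;

// Sketch of staging startup work (stage names are hypothetical):
// ATTACH ~ Application.attachBaseContext (MultiDex/hotfix),
// CREATE ~ Application.onCreate (basic libraries),
// UI     ~ first Activity (views + data loading).
enum Stage { ATTACH, CREATE, UI }

final class StagedInit {
    private final Map<Stage, List<Runnable>> tasks = new EnumMap<>(Stage.class);

    StagedInit add(Stage stage, Runnable task) {
        tasks.computeIfAbsent(stage, s -> new ArrayList<>()).add(task);
        return this;
    }

    // Run everything registered for one lifecycle stage, in order.
    void run(Stage stage) {
        for (Runnable r : tasks.getOrDefault(stage, List.of())) r.run();
    }
}
```

Each lifecycle callback then just calls `run(...)` for its stage, which keeps the callbacks themselves free of business detail.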

We can abstract this: at startup we run a pile of tasks, and these tasks have implicit relationships. For example, the system imposes an order on its callbacks; or we load a third-party library early because the UI we build later in the Application uses it directly.

Figure 3

Now that we understand the relationships within the startup process, how do we optimize it?

In the past we optimized startup in the conventional way: instrument the whole process from beginning to end and measure each segment. Using system tools we would find that some tasks are time-consuming (say task2 and task3) and ask whether they can move to other threads, or that some tasks are not needed during startup at all (say task5) and ask whether they can be dropped from the startup path, so that the work on the UI thread shrinks.

But we cannot ignore one point: these tasks have implicit dependencies. For example, task4 depends on task2; if we move task2 to another thread, that dependency still has to be maintained. The same goes for task3: move it to another thread and you must still preserve its dependencies, and this maintenance is itself a significant workload. Likewise, if task5 is skipped during startup, it still has to be done later.

This scheme does bring some improvement in startup speed, since the UI thread's load becomes lighter, but it also creates problems. As I just said, one is the cost of maintaining dependencies across threads; another is that the deferred tasks pile up after startup, putting pressure on that phase and causing jank there.

Let's summarize the problems we run into:

First, the code complexity introduced by concurrency keeps growing. How do we handle that?

Second, how do we handle dependencies under concurrency? Tasks run on multiple threads, yet there is still a dependency order during execution; the UI thread may depend on work happening on another thread. We cannot hand-build a lock for this dependency and a callback for that one.

Third, every time we find a slow task and throw it onto a background thread, it works at first, but in the long run it creates a thread bottleneck. Done carelessly it can even hurt performance, because CPU cores are limited: throw enough work over and the context switching itself costs you.

Fourth, if we defer tasks until after startup, jank after startup increases, sometimes noticeably. How should we deal with that?

Fifth, all our previous optimization was reactive. Can we prevent the problem before it happens, instead of waiting for a regression and then investigating or monitoring it?

Finally, UC Browser has been developed for many years. As a large app it has been through many rounds of optimization, but each round ends, a few versions iterate, and the metrics decline again. This is a problem large apps face constantly, and it is why our optimization work has had to be repeated over and over.

Figure 4

In response to these problems, I proposed the measures shown in Figure 4. The first four belong to startup itself, which I built into a loading framework. The single-point optimization below covers optimizations in the implementation itself. The overall scheme also has to answer: when we postpone or reschedule tasks, how do we handle the jank this causes in subsequent business, and how do we monitor the threads involved?

Startup loading framework

Since it is a startup loading framework, what should it load and how? As we said at the beginning, startup is a large set of tasks, and loading comes down to two techniques: concurrent loading and delayed loading. If every task truly must run, those are the only levers; the question is how to load more efficiently and gracefully. Our first step is to atomize the startup tasks: break each task down and make its dependencies explicit. Once atomized, every dependency between tasks is one-way, and cycles cannot appear.

Second, we split tasks wherever possible. For example, if tasks 6 and 10 used to be one task, then after splitting, task 9 can execute very early instead of waiting for 6 and 10 to finish together. This indirectly speeds up that chain of tasks and can shorten the effective critical path. Under concurrency, shortening the critical path shortens startup time.

In the end, the startup process becomes a directed acyclic graph of tasks. The purpose of atomization is to untangle dependencies and then minimize them. The granularity is an empirical value: each task should be independently maintainable and clustered by business, and for good scheduling we keep each task's cost within 200 milliseconds.

As mentioned earlier, the business keeps iterating, developers keep adding loading work or moving it asynchronous, and the startup process drifts out of control. To regain control, we want to describe the startup process as data: in the future, changing startup means changing that data. Since startup is a directed acyclic graph, a change is just an edit to the graph; we never touch the startup code itself, because a dedicated loader automatically decomposes the graph. Roughly, the description looks like this:

Figure 5

For example, in Figure 5, add(a) adds a task with no dependencies, which can run immediately. The red b can only run on the UI thread because it does UI-related rendering, so we write add(b) followed by true. Tasks with dependencies, such as c, declare them with depends(a); d depends on two tasks, using a multi-argument depends(b, c). Note the last interface, a barrier, similar in spirit to the barrier concept in Java's concurrency utilities.

The point of the barrier is that some tasks cannot proceed until, say, the user grants a permission at startup; if the user hasn't granted it, the tasks behind it must not run. So we place a fence there to hold them back; f below is an example of a task being held. With this in place, future changes to the startup process only mean changing the red part.

The second part, config, is where we configure the project's behavior. For instance, we can force all tasks to run on the UI thread, or set parameters such as the thread-pool size. With these features we can use adb shell's start command to specify the startup order, letting a run first or b run first. Any such order is simply one topological order of the graph. This is what "describable" buys us: lower maintenance cost in the future, since what we maintain is this data.
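A sketch of what such a declarative description might look like in code; the API shape (add, a UI-thread flag, dependencies, a barrier) is inferred from the talk, and the class is hypothetical:

```java
import java.util.*;

// Illustrative sketch of the declarative startup description (class and
// method names are guesses based on the talk, not UC's real API).
final class StartupGraph {
    final Map<String, Set<String>> deps = new LinkedHashMap<>();
    final Set<String> uiTasks = new HashSet<>();
    final Set<String> behindBarrier = new HashSet<>();

    // add("b", true) ~ "b can only run on the UI thread";
    // add("c", false, "a") ~ "c depends on a".
    StartupGraph add(String task, boolean onUiThread, String... dependsOn) {
        deps.computeIfAbsent(task, t -> new LinkedHashSet<>())
            .addAll(Arrays.asList(dependsOn));
        if (onUiThread) uiTasks.add(task);
        return this;
    }

    // Tasks behind the barrier stay parked until e.g. a permission is granted.
    StartupGraph barrier(String task) { behindBarrier.add(task); return this; }
}
```

Because the startup process is now just this data, changing startup later means editing the graph, and any adb-specified launch order is simply one topological order of it.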

One more point. We asked earlier: must all tasks be loaded at startup? Obviously not. Not everyone uses every service; some people only read novels, others only watch videos. Loading everything wastes memory on things that are never used and makes startup longer than it needs to be.
Figure 6

To solve this, we introduced scenes. All tasks live in a task pool. A user may enter to open a web page, or enter the browser to watch a video; other apps likewise have various scenes, and each scene has an entrance. On Android these are the component entry points, so each component entry only needs to care about the minimal startup set for that entrance.

This has several advantages. First, it avoids loading unnecessary tasks. Second, if our Service component comes up in the background and shares a common task C with MainActivity, then once the Service has run, entering the main interface can skip executing C, which also shortens startup to some extent. That is the scene mechanism.

Figure 7

We said the tasks form a dependency graph, so how do we run them concurrently? This is where topological decomposition comes in: we start from the nodes whose in-degree is 0.

What does in-degree 0 mean? If no arrow points at node 1, its in-degree is 0; out-degree counts outgoing arrows, so node 1 has two. Starting from in-degree-0 nodes means finding the tasks with no remaining dependencies, which can execute immediately; red nodes must execute on the UI thread. Walk through the decomposition: after 1 finishes, its two outgoing edges disappear; 3's in-degree drops to 0, and 2, being red, goes to the UI thread; after 2 finishes, its edges disappear too. Notice that 4 still cannot run at this point, because it still depends on 3; only when 3 finishes and its two edges disappear can 4 and 5 run concurrently. The rest proceeds the same way until the whole graph is consumed and startup completes.
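The decomposition walked through above is Kahn's topological sort. A plain-Java sketch follows, run sequentially here for clarity, whereas the real loader would hand each ready task to the UI thread or a worker as it becomes available:

```java
import java.util.*;

// Kahn's algorithm over a map of task -> list of tasks it depends on.
// Tasks whose in-degree reaches 0 are "ready"; finishing a task removes
// its outgoing edges, possibly freeing its dependents.
final class TopoScheduler {
    static List<String> schedule(Map<String, List<String>> dependsOn) {
        Map<String, Integer> inDegree = new HashMap<>();
        Map<String, List<String>> dependents = new HashMap<>();
        for (String t : dependsOn.keySet()) inDegree.put(t, 0);
        for (Map.Entry<String, List<String>> e : dependsOn.entrySet()) {
            for (String dep : e.getValue()) {
                inDegree.merge(e.getKey(), 1, Integer::sum);
                dependents.computeIfAbsent(dep, d -> new ArrayList<>()).add(e.getKey());
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : inDegree.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String t = ready.poll();          // "run" the ready task
            order.add(t);
            for (String next : dependents.getOrDefault(t, List.of()))
                if (inDegree.merge(next, -1, Integer::sum) == 0) ready.add(next);
        }
        return order;                          // one valid topological order
    }
}
```

In the concurrent version, several tasks can sit in the ready queue at once, which is exactly why 4 and 5 above can execute in parallel after 3 finishes.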

There are several advantages here. First, it solves dependency-aware concurrency. Second, the threads are under our control: the order in which tasks enter each thread is decided dynamically based on how busy each thread is. When we hand-placed tasks at various points, the threading model could never be this uniform. So this is also our answer to the problem of excessive concurrency dependencies.

Concurrency shortens the startup path to some extent, but it introduces a new problem. UC has nearly 30 startup tasks, and with their dependencies there are currently roughly 70,000 valid topological orders. To guarantee that startup is 100% correct online, we would have to test all 70,000 orders. That is clearly infeasible: the workload is too large, and our local devices simply cannot simulate 70,000 startup sequences. So how do we guarantee quality online?

Figure 8

We went online gradually in four steps. First, local automation: since we can specify a startup order via the adb command, we can randomize it and pre-run sequences to make sure most orders are fine. Then we do a small-scale gray release, whose purpose is to collect the startup sequences that actually occur online.

After collecting them, we analyzed the sequences. In several hundred thousand data points, the top 300 sequences accounted for 98% of PV, which means verifying just those 300 startup orders covers at least 98% of online launches. We did not chase the remaining 2% through testing; by the 80/20 principle the return is too low.

Instead, we watch the remaining cases through online monitoring: if abnormal data comes back, we verify those sequences specifically. We also keep the ability to fall back to executing the startup sequence serially; as the final step, if thread monitoring finds the abnormality rate exceeding a threshold, we can switch the concurrent path back off. These four measures together guarantee the rollout.

Single-point optimization

Figure 9

We have just optimized startup from the scheduling angle. But the startup path also contains problematic code, or code that fails to take advantage of good new system features and methods. For these there are a number of conventional optimizations, shown in Figure 9.

The second item, preloading of huge classes plus pre-starting, is worth mentioning. Preloading huge classes is quite effective for UC: after so many years of development there is a class of tens of thousands of lines, and new-ing it takes a very long time. So while the CPU is idle we preload this class in the background; when it is actually used later, the new completes quickly. The third item, pre-starting, is also quite effective: when our process comes up in the background, it can perform part of the startup work in advance, so that when the user really launches the app there is less left to do. The remaining items include centralizing IO on a single background thread, keeping startup data in memory rather than flushing it to disk repeatedly, and dealing with main-dex issues. These are optimizations of the implementation itself.
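The class-preloading idea can be sketched in plain Java: force class loading and static initialization on a low-priority background thread while the CPU is idle. The class names passed in are placeholders, not UC's actual heavyweight class:

```java
// Sketch of "preload a huge class while the CPU is idle": trigger class
// loading plus static init on a low-priority background thread, so the
// later `new` on the hot path finds the class already initialized.
final class ClassPreloader {
    static Thread preload(final String... classNames) {
        Thread t = new Thread(() -> {
            for (String name : classNames) {
                try {
                    // initialize=true also runs static initializers now.
                    Class.forName(name, true, ClassPreloader.class.getClassLoader());
                } catch (ClassNotFoundException ignored) {
                    // Preloading is best-effort; never crash startup for it.
                }
            }
        }, "class-preload");
        t.setPriority(Thread.MIN_PRIORITY);
        t.start();
        return t;
    }
}
```

On Android the trigger point would be an idle moment after the first frame; the MIN_PRIORITY thread keeps the preload from competing with startup work.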

Beyond the real optimizations, we can also optimize experience, interaction, and visuals. For example, even with a fast startup, the sequence of click, splash page, then business UI gives a flash that does not feel great; you might as well give the user the feeling of tapping straight into the browser. The second and third points in the figure are also worth trying; these are experience-level tricks.
Figure 10

Figure 10 shows the data after these optimizations. The red line is UC's fast-open ratio, defined as the proportion of launches in a large online sample (across users and PVs) that complete within 2 seconds. You can see the gains from scene-based startup, concurrent startup, warm startup, and the other optimizations accumulating over time. Their contributions are shown in the pie chart: scene-based and concurrent startup are the largest, so those two are the best place to begin your own optimization.
Figure 11

Figure 11 shows UC's production data. We began this optimization in mid-June last year and continued until around December. You can see the fast-open ratio rising and then holding, and the same for the slow-open ratio below (launches taking more than 4 seconds). We have not optimized again for over half a year, yet the metrics have held, and we no longer need to redo this optimization over and over as before.

Jank after startup

Figure 12

We said that a big part of our approach is scene-based startup, which defers some tasks to load later, and that brings a negative side effect: after startup, the app may be more janky than before. How do we solve this? Ultimately, post-startup jank comes from having too much work after startup, so there is a framework for post-startup work as well. After startup we do not spin up many threads; we concentrate on two: the UI thread, which already exists, and one background thread. Among the deferred tasks in Figure 12, those in black must run on the UI thread, while those in blue can run elsewhere, and we push as much as possible off the UI thread to lighten its load and keep it responsive to user input. When tasks do go to the UI thread, we no longer dump them in blindly; we maintain a budget. For example, after task1 and task2 run, we estimate how much UI-thread time they consumed and how long user events went unanswered. If that exceeds a threshold, we stop posting and wait for the UI thread to go idle before dispatching the third task.
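A plain-Java sketch of this budgeted dispatching; the accounting and numbers are illustrative, and on Android the onIdle hook would correspond to something like a MessageQueue.IdleHandler firing:

```java
import java.util.*;

// Sketch of "don't blindly flood the UI thread": release the next deferred
// task only while the recent busy time stays under a budget; otherwise wait
// for an idle moment. Budget and accounting are illustrative, not UC's.
final class ThrottledUiQueue {
    private final Deque<Runnable> pending = new ArrayDeque<>();
    private final long budgetMillis;
    private long busyMillis;           // busy time accumulated in this window

    ThrottledUiQueue(long budgetMillis) { this.budgetMillis = budgetMillis; }

    void post(Runnable task) { pending.add(task); }

    // Called from the UI loop: run queued tasks until the budget is spent.
    void drain() {
        while (!pending.isEmpty() && busyMillis < budgetMillis) {
            long start = System.nanoTime();
            pending.poll().run();
            busyMillis += (System.nanoTime() - start) / 1_000_000L;
        }
    }

    // Called when the UI thread goes idle (e.g. an IdleHandler firing).
    void onIdle() { busyMillis = 0; drain(); }

    int pendingCount() { return pending.size(); }
}
```

The effect is that a heavy deferred task cannot starve user input: once the budget is spent, the remaining tasks wait for the next idle window.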

Background threads have no such restriction, since they do not affect responsiveness; we can just enqueue tasks there. You might ask: we have stretched out the deferred task queue under UI-thread control, so what if the user needs one of those tasks right now? Say the user taps a novel and is about to enter the novel module. For a task destined for the UI thread, we simply pull it out of the queue and load it immediately. For tasks on the background thread, it depends on the task's own cost: if historical data says task9 is cheap, we move it straight onto the UI thread; if the user needs task10 and it is expensive, we cannot put it on the UI thread because it could trigger an ANR, so we pull it to the front of the background queue, show a loading spinner on the UI thread, and respond to the user's operation once the work completes. These are the rules we apply to deferred work.

Performance monitoring: startup speed

That was the optimization level; now the monitoring level. Monitoring splits in two: startup speed, and jank after startup. Startup-speed monitoring raises several questions. The old one again: how do we find problems earlier instead of patching them after release? Second, if startup slows down, how do we quickly locate the offending code? Third, how do we keep the monitoring code itself from perturbing the original code?
Figure 13

We monitor at three stages: code submission, laboratory, and online. At code submission we run Lint rules as a plugin: when code is submitted we analyze it and list, for example, duplicate or unused UI nodes in layout files, repeated posts to the UI thread, IO operations on the UI thread, and so on, so the problem surfaces at submission time. After submission we build a package and compare it against a previous version; the baseline can be the last release or the last commit, whichever you designate.

Here we distinguish first launch from subsequent launches, because their startup speeds differ greatly, and we interleave runs of the two packages so they share as similar an environment as possible. After both packages run, we compare the numbers: if the difference is under 50 ms we treat it as noise, because even the same package started twice on the same phone can deviate by about 50 ms. We run many iterations (20, 30, or 50) and compare: under 50 ms, no problem; over 50 ms, drill down further. To drill down we subdivide by task and sort tasks in descending order of regression; once we see which task regressed, we can locate the change and find the owner to fix it.

Figure 14

Figure 15

Figure 16

Sometimes the numbers show a regression but nobody knows why, and we need to go deeper into the code, for example into task6. Here we use an AOP approach to instrument the code. To avoid affecting release packages (this kind of instrumentation would be frightening online, with performance plummeting), it runs only in the laboratory stage: a package-level switch ensures only laboratory builds can enable it. This includes the TraceAspect just mentioned, which covers the problems in the previous table. You output the target list as a file into a local directory on the phone; on the next launch the tool picks up that directory, weaves the task6 logic into our code, and outputs the corresponding trace. That is the startup-related monitoring.

Performance monitoring: jank after startup

The other half is jank after startup. The classic way to monitor jank on Android is through the Looper, and UC used it years ago too. It is fine for catching big blocks, but its granularity is too coarse for frame-level jank: collecting a stack on every dispatched message costs a lot, and hooking every message over time also causes memory growth. So we use the Looper only to catch large blocking points and near-ANR cases; for ordinary pauses we measure frame rate instead.
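For reference, the Looper technique works because Android's main Looper prints a line before and after dispatching every message once a Printer is installed via Looper#setMessageLogging, so the Printer can time each dispatch. The timing logic in plain Java (the threshold value is illustrative):

```java
// Sketch of a Looper-based block detector: Android's Looper logs
// ">>>>> Dispatching ..." before and "<<<<< Finished ..." after each
// message, so feeding those lines to a Printer lets us time every
// dispatch and flag the slow ones (where a stack dump would be taken).
final class BlockDetector {
    private final long thresholdMillis;
    private long dispatchStart;
    private int slowCount;

    BlockDetector(long thresholdMillis) { this.thresholdMillis = thresholdMillis; }

    // Feed each line the Looper prints (this is the Printer#println body).
    void println(String line) {
        if (line.startsWith(">>>>> Dispatching")) {
            dispatchStart = System.nanoTime();
        } else if (line.startsWith("<<<<< Finished")) {
            long cost = (System.nanoTime() - dispatchStart) / 1_000_000L;
            if (cost >= thresholdMillis) slowCount++;  // would dump a stack here
        }
    }

    int slowMessages() { return slowCount; }
}
```

This also makes the cost argument above concrete: to get a useful stack you must sample during the slow dispatch, which is exactly the expensive part.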
Figure 17

How do we measure frame rate? There is a catch: frames only exist while the interface is refreshing; if nothing refreshes, there is no frame rate to read. Android provides an API for this, Choreographer, but only on Android 4.1 and above; fortunately most of our users' devices are already on 4.1 or higher. In its callback we increment a frame counter, fix a time window, and derive the frame rate from the window length and the frame count. There is another catch: the API delivers exactly one callback per post, so if you want frame callbacks to keep arriving on every vsync, even while the user is inactive, the callback must re-post itself.
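The counting logic can be sketched in plain Java; on Android this would be driven from Choreographer.FrameCallback#doFrame(long frameTimeNanos), with the callback re-posting itself via postFrameCallback as just described. The one-second window is illustrative:

```java
// Sketch of the frame-rate measurement: count frames in a fixed window of
// vsync timestamps and derive fps when the window closes. On Android the
// onFrame() call would come from Choreographer.FrameCallback#doFrame.
final class FrameRateMeter {
    private long windowStartNanos = -1;
    private int frames;
    private double lastFps;

    // Call once per vsync; returns true each time a window closes.
    boolean onFrame(long frameTimeNanos) {
        if (windowStartNanos < 0) windowStartNanos = frameTimeNanos;
        frames++;
        long elapsed = frameTimeNanos - windowStartNanos;
        if (elapsed >= 1_000_000_000L) {        // 1-second window
            lastFps = frames * 1e9 / elapsed;
            windowStartNanos = frameTimeNanos;  // start the next window
            frames = 0;
            return true;
        }
        return false;
    }

    double fps() { return lastFps; }
}
```

Feeding it timestamps spaced ~16.7 ms apart yields roughly 60 fps, which is the healthy baseline the monitoring compares against.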

You may notice that this re-posting means the callback never stops, which keeps the CPU from sleeping. How do we solve that? Monitoring exists to find problems, but we must not create new performance problems through performance monitoring itself. So our first principle is to keep the frame-rate monitoring window as short as possible. Second, since what we want to measure is whether the user's operations are janky, we only need the frame rate while the user is operating: monitor just that span and check whether the frame rate holds up.
Figure 18

How do we get the frame rate during a user operation? On Android, a full user operation breaks down into finger down, move, and finger up, and it does not end there: the interface may still be animating or scrolling, and that whole span is one complete operation. That is, we must cover from the user's first touch until the interface stops moving. For the first touch, we obtain Touch events by hooking the Activity Window Callback's dispatchTouchEvent; from the Down, Move, and Up events we know what the behavior was, whether a drag or a simple tap.

For the second point, how do we decide the interface has gone still? Recall the Choreographer callback: as long as there is a vsync, it fires. We also register an OnPreDrawListener on the Activity DecorView's ViewTreeObserver, which fires only when the interface actually draws. While the interface is moving, Choreographer callbacks and PreDraw callbacks correspond one to one; when they stop corresponding, the interface has stopped moving. This pins down the end of the operation precisely, and by monitoring frame rate only within this span, the cost of the monitoring itself is minimized.

With that in place, we can dump the user-scene data gathered during the operation, including the SM metric below (another smoothness measure, akin to frame rate), into a log. The log helps us locate which scenes were actually janky for the user, whether the operation was a drag, and so on. This is the behavior data we collect from monitoring.

Q&A

Question: Hi, I use UC Browser regularly and run into this: when the browser starts, sometimes before my first interaction it is still loading data and refreshing its page, and the page changes a lot at that moment. How do you monitor jank then? You just said gestures trigger the monitoring mechanism, which then watches every frame. But if data arrives from the network, or the app returns to UC from the background, a large data refresh can freeze the UI. How do you monitor that?
Liu Cheng: One is user operation behavior, the other is ok, but we can artificially set the value of this time, just like you may have some business behavior at the time, it may automatically start our frame rate in the business. Switch, we can have two ways, one way is that we track all user gestures and user operations, and the other way is for us to bury points independently. For the former, we don’t need to bury the points automatically, but for the latter method, like you said, in a specific scenario or after a period of time after startup, we can manually call the API to bury the points. .

Question: I have another question. When the APP starts, tasks are scheduled across multiple threads. When scheduling, how do you detect whether a given thread is busy or idle before assigning it a task?
Liu Cheng: First of all, every Looper thread has a message loop, and the loop's queue has an idle callback (MessageQueue.IdleHandler) that you can register. Normally we only detect idleness on the UI thread. Second, for the worker threads we actually don't care whether they are busy: one approach is simply to wait for a task's completion callback, and the other is that since all the tasks are created and dispatched by us, nothing runs on those threads that we didn't put there.
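On Android the idle callback is MessageQueue.IdleHandler, registered via Looper.myQueue().addIdleHandler(...), whose queueIdle() fires when the thread's queue has drained. Those classes are not available outside the platform, so here is a minimal pure-Java analogue of the same idea: a message loop that invokes an idle callback once its queue is empty.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A minimal message loop that invokes an idle callback whenever its
// queue drains, mimicking Android's MessageQueue.IdleHandler.
class MiniLooper {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    final List<String> log = new ArrayList<>();

    void post(Runnable r) { queue.add(r); }

    // Drain all pending tasks; when the queue is empty, the thread is
    // idle and the idle handler runs once.
    void loopUntilIdle(Runnable idleHandler) {
        Runnable task;
        while ((task = queue.poll()) != null) {
            task.run();
        }
        idleHandler.run();  // queue is empty: thread is idle
    }
}
```

In the real app, idle detection like this on the UI thread is what tells the scheduler it is safe to hand it another startup task.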

Question: For example, if a single task takes a long time to execute, effectively a big task, will you split it?
Liu Cheng: Yes, that's the atomization step we just talked about. We try to keep each step below 200 milliseconds, because 200 milliseconds already spans quite a few frames. So, to ensure good scheduling, we split large tasks into smaller ones.
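A minimal sketch of that atomization: one large job is cut into fixed-size steps so each stays under a time budget, and each step is an independently schedulable unit. The chunk size and the 200 ms budget here are illustrative, not UC's actual values:

```java
import java.util.ArrayList;
import java.util.List;

// Splits a large unit of work into small "atomic" steps so that each
// step stays short enough not to block frame scheduling.
class TaskSplitter {
    // Split totalItems of work into steps of at most stepSize items;
    // each Runnable is one atomic step the scheduler can place freely.
    static List<Runnable> split(int totalItems, int stepSize, int[] progress) {
        List<Runnable> steps = new ArrayList<>();
        for (int start = 0; start < totalItems; start += stepSize) {
            final int from = start;
            final int to = Math.min(start + stepSize, totalItems);
            steps.add(() -> {
                for (int i = from; i < to; i++) {
                    progress[0]++;  // stand-in for real per-item work
                }
            });
        }
        return steps;
    }
}
```

Between steps, the scheduler can yield to the UI thread, which is what keeps each slice under the frame-time budget.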

Question: Hello, I have a few questions. The first is about the 70,000-plus scenarios your slides mentioned. What exactly are those 70,000-plus scenarios? Could you elaborate?
Liu Cheng: Our startup work is divided into twenty-plus tasks that run concurrently, and their order is not fixed. With no dependencies at all, the number of possible startup sequences is the number of permutations of those twenty-plus tasks; the factorial of 25, say, is an astronomically large number. On top of that we added dependency constraints, which brings the number of possible sequences down to roughly 70,000 to 100,000.

Question: So the 70,000 figure is not related to specific business logic?
Liu Cheng: It has nothing to do with the business; it is purely about how startup can be ordered. Take three tasks, 1, 2, 3: if we add no dependencies, the possible orders include 1,2,3; 3,2,1; 1,3,2; 2,3,1, and so on. With our twenty-plus tasks, that count grows factorially. So we deliberately added dependencies, for example requiring that 2 must run after 1 and 3 must run after 2, to constrain the ordering. That is what we mean by converging the number of scenarios.
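The counting argument can be checked by brute force for a small case: count the permutations of n tasks that respect the declared dependencies. This is just an illustration of the math, not UC's actual scheduler:

```java
import java.util.ArrayList;
import java.util.List;

// Counts how many execution orders of tasks 0..n-1 satisfy dependency
// pairs {before, after}, meaning "after" must run after "before".
class OrderCounter {
    static int countValidOrders(int n, int[][] deps) {
        return permute(new ArrayList<>(), new boolean[n], n, deps);
    }

    private static int permute(List<Integer> order, boolean[] used,
                               int n, int[][] deps) {
        if (order.size() == n) return respects(order, deps) ? 1 : 0;
        int total = 0;
        for (int t = 0; t < n; t++) {
            if (!used[t]) {
                used[t] = true;
                order.add(t);
                total += permute(order, used, n, deps);
                order.remove(order.size() - 1);
                used[t] = false;
            }
        }
        return total;
    }

    private static boolean respects(List<Integer> order, int[][] deps) {
        for (int[] d : deps) {
            if (order.indexOf(d[0]) > order.indexOf(d[1])) return false;
        }
        return true;
    }
}
```

For three tasks with no dependencies there are 6 orders; requiring task 1 after task 0 halves that to 3; a full chain 0 before 1 before 2 leaves exactly 1. The same mechanism, scaled to twenty-plus tasks, is how the dependency constraints converge the factorial explosion down to tens of thousands of sequences.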

Question: The second question: how do you measure, on the client, the startup time that goes into your market data?
Liu Cheng: On the client it works like this. For the starting point, we add a static variable to the Application class that is initialized when the class is loaded, which effectively timestamps the very beginning of startup. For the end point, we define it as the first frame after the home page's draw (View.dispatchDraw) is called: at that moment we post a message to the bottom of the message queue, and when that message runs, startup is really finished. If you break at that instant, you will find the home page is already displayed. We record this time point on every phone, on every launch, and upload the data to our server for analysis.
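The starting-point trick, a static field initialized at class-load time, can be shown in plain Java. In the real app the field lives in the Application class and the end point is posted after the home page's dispatchDraw; here the class name is a hypothetical stand-in:

```java
// A static field is initialized when the class is loaded, so it
// captures the earliest moment of startup observable from Java code.
class StartupClock {
    static final long CLASS_LOAD_TIME = System.currentTimeMillis();

    // Called once the first home-page frame has been drawn; the
    // difference is the measured startup duration.
    static long elapsedSinceLoad() {
        return System.currentTimeMillis() - CLASS_LOAD_TIME;
    }
}
```

Posting the end-point marker to the bottom of the message queue ensures it runs only after the pending draw has completed, which is why the home page is already visible when it fires.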

Question: There is a problem here. The user's perception is what matters: to the user, startup begins when they click the icon and ends when they see your interface. But you can't count the click itself, so you are missing the interval before your code runs.
Liu Cheng: Yes, that gap exists. When the user clicks the icon, part of the response happens at the system level: the click is actually handled by the launcher, and the system does some native work before our process is even started, a slice of time we cannot observe from inside the app. To quantify the difference between the user's click and our measured starting point, we use the adb start command. As mentioned earlier, that command reports three values, and WaitTime covers the span that includes the system-side slice. In practice that slice is relatively small, so its impact on our market data is limited.
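For reference, the three values come from launching the activity with the `-W` flag. The output below is illustrative (times in milliseconds, package and activity names hypothetical):

```
$ adb shell am start -W -n com.example.app/.MainActivity
Status: ok
Activity: com.example.app/.MainActivity
ThisTime: 480
TotalTime: 480
WaitTime: 510
```

ThisTime is the launch time of the final activity, TotalTime covers the whole launch sequence, and WaitTime additionally includes the system-side time before the app's process is up, which is the slice discussed above.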

Question: The last question: what is your internal criterion for deciding whether a task should be placed on the UI thread or in the background? Some tasks could run in either.
Liu Cheng: Currently this is pre-defined. There was a slide earlier showing the description of each startup task, and the thread is specified there in advance. Or do you mean dynamically determining at runtime whether a task runs on the UI thread or a non-UI thread?

Question: No. When we define a task, there should be a standard or convention: when I define this task, do I put it on the UI thread or on a background thread? What is the criterion for that distinction?
Liu Cheng: Currently it depends on whether the task touches the UI. If it performs UI rendering, it must be on the UI thread. If it merely involves UI objects, not necessarily: we can initialize UI components in the background. We are even working toward asynchronous rendering, where the rendering itself is done on a background thread, so in principle any task could run asynchronously. But for now, to avoid unnecessary trouble, we force UI rendering onto the UI thread; everything else can be put on background threads.

Recommended reading

  • How does a web server achieve high throughput and low latency? Dropbox's optimization guide from the operating system to the application layer
  • Meitu's HTTPS optimization: exploration and practice
  • The secret behind mobile QQ's 8x faster uploads: the "Shark Fin" project for speed and success rate
  • Tencent TMQ team on mobile app network optimization: cutting 24-hour traffic to 15% of the original

This article is by Liu Cheng; please indicate the source when reprinting. We welcome original technical and architecture-practice articles; submit them via the official account menu "Contact Us".

Highly available architecture

Changing the way the internet is built

Origin blog.51cto.com/14977574/2546991