The browser page event loop

How does a browser page "move"?

Each rendering process has a main thread, and that thread is very busy: it builds the DOM, calculates styles, performs layout, executes JavaScript, and handles various input events.

Event loop mechanism

To receive and execute new tasks while the thread keeps running, an event loop mechanism is required.

(Figure: the event loop on the main thread)

message queue

Message queues are used to implement communication between threads.

(Figure: threads communicating through a message queue)

What kinds of tasks are in the message queue? Chromium's source code defines many internal message types, such as input events (mouse scrolling, clicking, moving), microtasks, file reads and writes, WebSockets, JavaScript timers, and so on. The message queue also contains many page-related tasks, such as JavaScript execution, DOM parsing, style calculation, layout calculation, and CSS animation.

Disadvantages of using a single thread

  • How to handle high priority tasks

If the DOM changes and the page is notified synchronously, the execution efficiency of the current task suffers; if it is notified asynchronously, real-time monitoring suffers.

Usually we call the tasks in the message queue macrotasks, and each macrotask contains a microtask queue. If the DOM changes while a macrotask is executing, the change is added to the microtask queue. This way the macrotask can continue executing uninterrupted, which solves the efficiency problem.

After the main function of the macrotask finishes, the rendering engine does not rush to execute the next macrotask; it first executes the microtasks of the current macrotask. Because the DOM-change events are stored in the microtask queue, this solves the real-time problem.

  • How to solve the problem that the execution time of a single task is too long

Because all tasks execute on a single thread, only one task can run at a time while the others wait. If one task takes too long to execute, the next task has to wait a long time.

(Figure: a long task blocking subsequent tasks in the queue)

For this situation, JavaScript can avoid the problem through callback functions, that is, by deferring the execution of the JavaScript task.

How setTimeout is implemented

The setTimeout method is a timer that specifies that a function should execute after a given number of milliseconds. It returns an integer, the timer's ID, which can be used to cancel the timer.

function showName() {
  console.log("极客时间")
}
var timerID = setTimeout(showName, 200);
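The returned timer ID can be passed to clearTimeout to cancel the pending callback before it fires. A minimal sketch (the names here are illustrative):

```javascript
let fired = false;

function lateGreeting() {
  fired = true;
  console.log("too late");
}

// setTimeout returns the timer's ID; passing it to clearTimeout
// cancels the callback before the 200 ms delay elapses.
const cancelId = setTimeout(lateGreeting, 200);
clearTimeout(cancelId);
```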

All tasks running on the main thread of the rendering process are first added to the message queue, and the event loop system then executes them in order. Some typical events:

  • When HTML document data is received, the rendering engine adds a "parse DOM" task to the message queue;
  • When the user resizes the page window, the rendering engine adds a "re-layout" task to the message queue;
  • When the JavaScript engine's garbage collection is triggered, the rendering engine adds a "garbage collection" task to the message queue;
  • Similarly, to execute a piece of asynchronous JavaScript code, an execution task must also be added to the message queue.

The list above is only a small subset of events. Once these events are added to the message queue, the event loop system executes them in queue order.

So to execute an asynchronous task, the task must first be added to the message queue. Timer callbacks, however, are special: they must be invoked after a specified interval, but tasks in the message queue are executed strictly in order, so to guarantee the callback runs at the right time, the timer's callback cannot simply be appended to the message queue.

In addition to the normal message queue, Chrome maintains another queue: a list of tasks that need to be executed with a delay, including timers and some internal Chromium tasks that must be deferred. When JavaScript creates a timer, the rendering process adds the timer's callback task to this delayed queue. You can refer to the queue implementation in Chromium's source code.

When setTimeout is called from JavaScript, the rendering process creates a callback task containing the callback function (showName), the current initiation time, and the delay. It then adds this task to the delayed-execution queue.

The message-loop code, extended to process the delayed queue, looks roughly like this:

void ProcessTimerTask() {
  // Take the expired timer tasks out of delayed_incoming_queue
  // and execute them in order
}

TaskQueue task_queue;
void ProcessTask();
bool keep_running = true;
void MainThread() {
  for (;;) {
    // Execute a task from the message queue
    Task task = task_queue.takeTask();
    ProcessTask(task);

    // Execute due tasks from the delayed queue
    ProcessDelayTask();

    if (!keep_running) // If the exit flag is set, break out of the loop
      break;
  }
}

In the code above, after one task from the message queue is processed, the ProcessDelayTask function runs. It computes which tasks are due based on their initiation time and delay, executes those due tasks in sequence, and then the loop continues to the next iteration. This implements a complete timer.
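The loop above can be modeled in plain JavaScript. This is a simplified sketch, not Chromium's actual implementation: ordinary tasks drain in FIFO order, and after each one the delayed queue is checked for due timers.

```javascript
// Simplified model of a message loop with a separate delayed queue.
const taskQueue = [];    // ordinary macrotasks (FIFO)
const delayedQueue = []; // { runAt, callback } entries

function postTask(callback) {
  taskQueue.push(callback);
}

function postDelayedTask(callback, delay, now) {
  delayedQueue.push({ runAt: now + delay, callback });
}

// Run every timer whose due time has passed.
function processDelayedTasks(now) {
  for (let i = delayedQueue.length - 1; i >= 0; i--) {
    if (delayedQueue[i].runAt <= now) {
      const { callback } = delayedQueue.splice(i, 1)[0];
      callback();
    }
  }
}

// One loop: run a task, advance the clock, then check due timers.
function runLoop(clock) {
  while (taskQueue.length > 0) {
    const task = taskQueue.shift();
    task();
    clock.time += 1; // pretend each task takes one time unit
    processDelayedTasks(clock.time);
  }
}

// Usage: a timer due at t=2 fires only after enough tasks have run.
const log = [];
const clock = { time: 0 };
postDelayedTask(() => log.push("timer"), 2, clock.time);
postTask(() => log.push("task1"));
postTask(() => log.push("task2"));
postTask(() => log.push("task3"));
runLoop(clock);
// log is now ["task1", "task2", "timer", "task3"]
```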

Some notes on using setTimeout

If the current task runs too long, it delays the execution of timer tasks

When using setTimeout, several factors can make the callback run later than the expected delay. One of them is that the current task's execution time is too long, so the timer's task is postponed.

function bar() {
  console.log('bar')
}
function foo() {
  setTimeout(bar, 0);
  for (let i = 0; i < 5000; i++) {
    let sum = 5 + 8 + 8 + 8
    console.log(sum)
  }
}
foo()

When foo executes, it uses setTimeout to schedule a callback with 0 delay, then continues to run the for loop 5000 times.

The callback task set by setTimeout is put into the message queue to wait its turn; it is not executed immediately. The next task in the message queue can only run after the current task completes, and because the current code loops 5000 times, the current task runs for a long time. This inevitably delays the next task.

(Figure: Performance recording showing the foo task delaying the timer callback)

If executing foo takes 500 milliseconds, the task set by setTimeout is delayed until 500 milliseconds later, even though its callback delay was 0.

If setTimeout calls are nested, the system enforces a minimum interval of 4 milliseconds

function cb() { setTimeout(cb, 0); }
setTimeout(cb, 0);

You can use Performance to record the execution process of this code, as shown in the following figure:

(Figure: Performance recording of nested setTimeout callbacks)

Each vertical line is a timer callback invocation. The figure shows that the interval between the first five calls is quite small; once the nesting exceeds five levels, each subsequent call has a minimum interval of 4 milliseconds. The reason is that in Chrome, if a timer is nested more than 5 times, the system judges the function to be blocking: if the requested interval is less than 4 milliseconds, the browser raises it to 4 milliseconds (see Chromium's 4 ms clamping code).

Therefore, setTimeout is not suitable for scenarios with high real-time requirements; for example, implementing JavaScript animation with setTimeout is not a good idea.

For inactive pages, the minimum setTimeout interval is 1000 milliseconds

In a page that is not active, timers have a minimum interval of 1000 milliseconds: if the tab is not the currently active tab, the minimum timer interval is raised to 1000 milliseconds. The purpose is to reduce the load of background pages and lower power consumption.

Delayed execution time has a maximum value

Chrome, Safari, and Firefox all store the delay value in 32 bits. The largest value 32 bits can hold is 2147483647 milliseconds, so if the delay set by setTimeout exceeds 2147483647 milliseconds (about 24.8 days), it overflows and the timer executes immediately.
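A quick check of the boundary value (the overflow itself is browser behavior; the snippet only illustrates the 32-bit ceiling):

```javascript
// The delay is stored as a signed 32-bit integer, so the largest
// representable delay is 2^31 - 1 milliseconds.
const MAX_DELAY = 2 ** 31 - 1; // 2147483647 ms
const days = MAX_DELAY / (1000 * 60 * 60 * 24);
console.log(`${MAX_DELAY} ms is about ${days.toFixed(1)} days`);

// In browsers, a larger value overflows and the timer fires almost
// immediately instead of ~25 days later, e.g.:
// setTimeout(fn, MAX_DELAY + 1); // fires right away
```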

The this inside a setTimeout callback is not intuitive

If the callback deferred by setTimeout is a method of an object, then this inside the method points to the global environment, not the object on which the method was defined:

var name = 1;
var MyObj = {
  name: 2,
  showName: function () {
    console.log(this.name);
  }
}
setTimeout(MyObj.showName, 1000)
// Logs 1: when this code runs, `this` in the execution context is set
// to the global window object, or undefined in strict mode.

// Solving this problem
// Fix 1: call MyObj.showName inside a wrapper function
// Arrow function
setTimeout(() => {
  MyObj.showName()
}, 1000);
// Or a regular function
setTimeout(function () {
  MyObj.showName();
}, 1000)

// Fix 2: use bind to bind showName to MyObj
setTimeout(MyObj.showName.bind(MyObj), 1000)

How XMLHttpRequest is implemented

XMLHttpRequest provides the ability to obtain data from a web server. To update a piece of data, you only need to request the interface provided by the server through XMLHttpRequest, then update the page content by operating on the DOM. The whole process updates only part of the page rather than refreshing the entire page, which is efficient and does not disturb the user.

Callback

let callback = function () {
  console.log('i am do homework')
}
function doWork(cb) {
  console.log('start do work')
  cb()
  console.log('end do work')
}
doWork(callback)

Assign an anonymous function to the variable callback, and pass the callback as a parameter to the doWork() function. At this time, the callback is the callback function in the function doWork().

The callback above has one characteristic: the callback function executes before the main function doWork returns. This process is called a synchronous callback.

let callback = function () {
  console.log('i am do homework')
}
function doWork(cb) {
  console.log('start do work')
  setTimeout(cb, 1000)
  console.log('end do work')
}
doWork(callback)

Here setTimeout defers the callback until 1 second after doWork has finished executing; this time the callback is not invoked inside the main function doWork. A callback that executes outside the main function like this is called an asynchronous callback.

The browser page is driven by the event loop mechanism. Each rendering process has a message queue, and the page's main thread executes the events in it in order: executing JavaScript, parsing the DOM, calculating layout, handling user input, and so on. When the page generates a new event, it is appended to the end of the queue.

When the loop system executes a task, it maintains a system call stack for that task. This system call stack is similar to JavaScript's call stack, except that it is maintained by C++, Chromium's implementation language. You can capture complete call stack information through chrome://tracing/, or capture the core call information through Performance, as shown in the following figure:

(Figure: Performance recording of a Parse HTML task)

This picture records a Parse HTML task execution process, where the yellow entries represent the execution process of JavaScript, and the entries of other colors represent the execution process of the browser's internal system.

The Parse HTML task encounters a series of sub-processes during execution. For example, when a JavaScript script is encountered while parsing the page, parsing is suspended to execute the script, and resumes after the script completes. Then a style sheet is encountered, and parsing of the style sheet begins... and so on until the entire task is complete.

Note that the entire Parse HTML is one complete task; the script parsing and style-sheet parsing during execution are sub-processes of that task, and the long bars extending downward are the call-stack information during execution.

Each task has its own call stack while it executes, so a synchronous callback simply runs in the context of the current main function; there is not much to say about it. Let's focus on the asynchronous callback process. An asynchronous callback means the callback function executes outside the main function, generally in one of two ways:

  • The first is to make the asynchronous function a task and add it to the end of the message queue;
  • The second is to add the asynchronous function to the microtask queue, so that the microtask can be executed at the end of the current task.
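The two paths above can be observed with a small sketch; the ordering is guaranteed by the event loop, since microtasks always run at the end of the current task, before the next macrotask:

```javascript
const order = [];

setTimeout(() => order.push("macrotask"), 0);          // path 1: end of the message queue
Promise.resolve().then(() => order.push("microtask")); // path 2: microtask queue

order.push("sync"); // the current task's own code runs first

// After the current task finishes, the microtask runs before the next
// macrotask, so the final order is: sync, microtask, macrotask.
```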

How XMLHttpRequest works

(Figure: how XMLHttpRequest works)

function GetWebData(URL) {
  /**
   * 1: Create the XMLHttpRequest object
   */
  let xhr = new XMLHttpRequest()

  /**
   * 2: Register the event callback handlers
   */
  xhr.onreadystatechange = function () {
    switch (xhr.readyState) {
      case 0: // UNSENT: request not initialized
        console.log("request not initialized")
        break;
      case 1: // OPENED
        console.log("OPENED")
        break;
      case 2: // HEADERS_RECEIVED
        console.log("HEADERS_RECEIVED")
        break;
      case 3: // LOADING
        console.log("LOADING")
        break;
      case 4: // DONE
        if (this.status == 200 || this.status == 304) {
          console.log(this.responseText);
        }
        console.log("DONE")
        break;
    }
  }

  xhr.ontimeout = function (e) { console.log('ontimeout') }
  xhr.onerror = function (e) { console.log('onerror') }

  /**
   * 3: Open the request
   */
  xhr.open('GET', URL, true); // create a GET request, asynchronous

  /**
   * 4: Configure parameters
   */
  xhr.timeout = 3000 // timeout for the xhr request
  xhr.responseType = "text" // format of the response data
  xhr.setRequestHeader("X_TEST", "time.geekbang")

  /**
   * 5: Send the request
   */
  xhr.send();
}
  • Step 1: Create an XMLHttpRequest object.

    • When executed let xhr = new XMLHttpRequest(), JavaScript will create an XMLHttpRequest object xhr to perform actual network request operations.
  • Step 2: Register the callback function for the xhr object.

    • Because the network request is time-consuming, a callback function must be registered so that when the background task finishes, it can report its result through the callback.

      The callback functions of XMLHttpRequest mainly include the following types:

      • ontimeout, used to monitor timeout requests, if the background request times out, this function will be called;
      • onerror, used to monitor error information, if the background request fails, this function will be called;
      • onreadystatechange, which is used to monitor the status of the background request process, such as the message that the HTTP header is loaded, the HTTP response body message, and the message that the data is loaded.
  • Step 3: Configure basic request information.

    • After registering the callback events, the next step is to configure basic request information through the open interface: the request address, the request method (GET or POST), and whether the request is synchronous or asynchronous.

      Then configure other optional request information through the xhr object's properties, as in the sample code. Setting xhr.timeout = 3000 configures the timeout: if the request receives no response within 3000 milliseconds, it is judged to have failed.

      You can also set xhr.responseType = "text" to configure the format of the server's response, so that the returned data is automatically converted to the format you want. If responseType is set to json, the server's response is automatically converted into a JavaScript object. The figure below describes some of the return types:

      If you need special request headers, add them through xhr.setRequestHeader.

      (Figure: responseType values and their converted return types)

  • Step 4: Initiate a request.

    • Once everything is ready, calling xhr.send initiates the network request. The rendering process sends the request to the network process, which is responsible for downloading the resource. When the network process receives data, it notifies the rendering process through IPC; the rendering process then wraps xhr's callback function into a task and adds it to the message queue. When the main thread's loop system executes that task, it calls the corresponding callback according to the request state.

      • If there is an error in the network request, xhr.onerror will be executed;
      • If it times out, xhr.ontimeout will be executed;
      • If it is normal data reception, onreadystatechange will be executed to feedback the corresponding state.

      This is the complete XMLHttpRequest request process. If you are interested, you can refer to Chromium's implementation of XMLHttpRequest in its source code.

Pitfalls when using XMLHttpRequest

Cross-domain issues

By default, cross-origin requests are not allowed

var xhr = new XMLHttpRequest()
var url = 'https://time.geekbang.org/'
function handler() {
  switch (xhr.readyState) {
    case 0: // UNSENT: request not initialized
      console.log("request not initialized")
      break;
    case 1: // OPENED
      console.log("OPENED")
      break;
    case 2: // HEADERS_RECEIVED
      console.log("HEADERS_RECEIVED")
      break;
    case 3: // LOADING
      console.log("LOADING")
      break;
    case 4: // DONE
      if (this.status == 200 || this.status == 304) {
        console.log(this.responseText);
      }
      console.log("DONE")
      break;
  }
}

function callOtherDomain() {
  if (xhr) {
    xhr.open('GET', url, true)
    xhr.onreadystatechange = handler
    xhr.send();
  }
}
callOtherDomain()

First open www.geekbang.org in the browser, then open the console, paste the sample code above, and execute it. You will see that the request is blocked, with the following console message:

Access to XMLHttpRequest at 'https://time.geekbang.org/' from origin 'https://www.geekbang.org' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.

Because www.geekbang.org and time.geekbang.org do not belong to the same origin, the access above is cross-origin, and the failure is caused by the cross-origin restriction.
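The fix has to come from the server: the response must carry CORS headers granting access to the requesting origin. A hypothetical response that would allow the request above (the header values are illustrative, not what time.geekbang.org actually sends):

```http
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://www.geekbang.org
Access-Control-Allow-Methods: GET, POST
Content-Type: text/html; charset=utf-8
```

With Access-Control-Allow-Origin matching the page's origin, the browser lets the script read the response instead of blocking it.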

Problems with HTTPS mixed content

HTTPS mixed content means an HTTPS page contains content that does not meet HTTPS security requirements: resources loaded over HTTP, such as images, videos, style sheets, and scripts, are all mixed content.

Usually, if an HTTPS page uses mixed content, the browser displays a mixed-content warning to indicate to the user that the page contains insecure resources. For example, open https://www.iteye.com/groups and you can see the mixed-content warning in the console:

(Figure: HTTPS mixed-content warning in the console)

As the figure shows, although mixed resources loaded via the HTML file produce a warning, most types can still be loaded. With an XMLHttpRequest request, however, the browser assumes the request may have been initiated by an attacker and blocks it as dangerous. For example, open https://www.iteye.com/groups in the browser, then use XMLHttpRequest in the console to request http://img-ads.csdn.net/2018/201811150919211586.jpg; the request fails with the error shown below:

(Figure: blocked mixed-content XMLHttpRequest error)

Macrotasks and Microtasks

Microtask-based technologies include MutationObserver, Promise, and many other technologies developed based on Promise.

Macrotasks

Most tasks in a page are performed on the main thread, including:

  • Rendering events (such as parsing DOM, calculating layout, drawing);
  • User interaction events (such as mouse clicks, scrolling pages, zooming in and out, etc.);
  • JavaScript script execution events;
  • Network request completion, file read and write completion events.

To coordinate the orderly execution of these tasks on the main thread, the page process introduces message queues and the event loop mechanism. The rendering process maintains multiple message queues internally, such as the delayed-execution queue and the ordinary message queue. The main thread then uses a for loop to continuously take tasks out of these queues and execute them. These tasks in the message queues are called macrotasks.

Macrotasks can meet most everyday needs, but for requirements demanding high timing precision, macrotasks fall short. Let's analyze why macrotasks have difficulty meeting tasks that require high time precision.

Page rendering events, various IO-completion events, JavaScript execution events, user interaction events, and so on can be added to the message queue at any time. Adding events is done by the system; JavaScript code cannot precisely control where a task is placed in the queue, so it is hard to control when a task starts to execute.

<!DOCTYPE html>
<html>
    <body>
        <div id='demo'>
            <ol>
                <li>test</li>
            </ol>
        </div>
    </body>
    <script type="text/javascript">
        function timerCallback2() {
            console.log(2)
        }
        function timerCallback() {
            console.log(1)
            setTimeout(timerCallback2, 0)
        }
        setTimeout(timerCallback, 0)
    </script>
</html>

In this code, I set two callback tasks through setTimeout and want them to execute in sequence with no other tasks inserted between them, because any task inserted between the two would likely affect when the second timer executes.

But the actual situation is uncontrollable. For example, in the gap between setting the two callbacks, many system-level tasks may be inserted into the message queue. You can open the Performance tool to record the execution process:

(Figure: Performance recording showing tasks inserted between the two timer callbacks)

The callback functions triggered by the setTimeout function are all macro tasks. In the figure, the two yellow blocks on the left and right are the two timer tasks triggered by setTimeout.

The light-red area in the middle of the figure contains many small tasks: these were inserted between the two timer tasks by the rendering engine. If an inserted task's execution time is too long, it delays the subsequent tasks.

Therefore, the time granularity of macro tasks is relatively large, and the execution time interval cannot be precisely controlled. It does not meet some high real-time requirements, such as the need to monitor DOM changes.

Microtasks

A microtask is a function that needs to be executed asynchronously. The timing of execution is after the execution of the main function and before the end of the current macrotask.

There are two main ways of asynchronous callback:

The first is to encapsulate the asynchronous callback function into a macro task, add it to the end of the message queue, and execute the callback function when the loop system executes the task . The callback functions of setTimeout and XMLHttpRequest are implemented in this way

The execution timing of the second method is to execute the callback function after the execution of the main function ends and before the end of the current macro task, which is usually embodied in the form of micro tasks.

When JavaScript executes a script, V8 creates a global execution context for it, and at the same time the V8 engine also creates a microtask queue internally. As the name implies, this queue stores microtasks: multiple microtasks may be generated during the execution of the current macrotask, and this queue is needed to hold them. The queue is used internally by the V8 engine, so it cannot be accessed directly from JavaScript.

That is to say, each macrotask is associated with a microtask queue. Next, let's analyze two important moments: when microtasks are generated, and when the microtask queue is executed.

  • There are two ways to generate microtasks:

The first way is to use MutationObserver to monitor a DOM node, then modify that node through JavaScript, or add or remove child nodes of it. When the DOM node changes, a DOM-change-record microtask is generated.
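The first way can be sketched as follows. This is a browser-only illustration (MutationObserver and a DOM node are assumed; the guard makes it inert elsewhere, such as under Node.js):

```javascript
// Watch a node for child-list changes. Each burst of DOM mutations is
// delivered as one batch via a microtask-based callback.
function watchNode(target) {
  if (typeof MutationObserver === "undefined") {
    return null; // no DOM environment available
  }
  const observer = new MutationObserver((mutations) => {
    console.log(`received ${mutations.length} change records`);
  });
  observer.observe(target, { childList: true, subtree: true });
  return observer;
}
```

In a browser, calling `watchNode(document.getElementById('demo'))` and then appending children to that node would log one batched notification rather than one call per change.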

The second way is to use Promise. When Promise.resolve() or Promise.reject() is called, microtasks will also be generated.
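The second way can be observed directly: even a Promise that is already resolved defers its .then callback to the microtask queue rather than invoking it synchronously. A minimal sketch:

```javascript
const steps = [];

const p = Promise.resolve("done"); // already resolved
p.then((value) => steps.push(`then: ${value}`));

// This line runs first: the .then callback is queued as a microtask,
// so it executes only after the current synchronous code finishes.
steps.push("after then()");
```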

  • When the microtask queue is executed:

Normally, when the JavaScript in the current macrotask is about to finish (that is, when the JavaScript engine is about to exit the global execution context and clear the call stack), the engine checks the microtask queue in the global execution context and executes the queued microtasks in order. The WHATWG calls the point at which microtasks are executed a checkpoint. Besides the checkpoint of exiting the global execution context there are other checkpoints, but they are less important here.

If a new microtask is generated during the execution of the microtask, the microtask will also be added to the microtask queue, and the V8 engine will continue to execute the tasks in the microtask queue in a loop until the queue is empty. That is to say, the new microtask generated during the execution of the microtask will not be postponed to the next macrotask, but will continue to be executed in the current macrotask.

(Figure: microtask queue execution during a ParseHTML macrotask)

The diagram is to execute a ParseHTML macro task. During the execution, if a JavaScript script is encountered, the parsing process will be suspended and the JavaScript execution environment will be entered. As can be seen from the figure, the global context contains a list of microtasks.

During the script's execution, two microtasks are created (through Promise and removeChild respectively) and added to the microtask list. When the JavaScript finishes and the global execution context is about to exit, the checkpoint is reached: the JavaScript engine checks the microtask list, finds the pending microtasks, and executes them in sequence. Once the microtask queue is empty, the global execution context exits.

From the above analysis we can draw the following conclusions :

  • Microtasks and macrotasks are bound together: each macrotask creates its own microtask queue when it executes.
  • The execution time of microtasks affects the duration of the current macrotask. For example, if a macrotask generates 100 microtasks and each takes 10 milliseconds to execute, those microtasks add 1000 milliseconds to the macrotask's execution time. So when writing code, pay attention to controlling how long microtasks run.
  • Within one macrotask, if you schedule both a macrotask and a microtask as callbacks, the microtask always executes before the macrotask.

Monitor DOM change method evolution

MutationObserver is a set of methods used to monitor DOM changes, and monitoring DOM changes has always been a very core requirement of front-end engineers.

Many web applications build custom controls with HTML and JavaScript. Unlike built-in controls, these are not native to the platform, and to work well alongside built-in controls they must adapt to content changes and respond to events and user interaction. Web applications therefore need to monitor DOM changes and respond in time.

Early pages provided no support for such monitoring, so the only way to observe whether the DOM had changed was polling: using setTimeout or setInterval to periodically check. This approach is simple but crude, and it runs into two problems: if the interval is set too long, responses to DOM changes are not timely enough; conversely, if it is set too short, many useless checks waste work and make the page inefficient.

This lasted until Mutation Events were introduced in 2000. Mutation Events adopt the observer design pattern: when the DOM changes, the corresponding event fires immediately, as a synchronous callback.

Mutation Events solved the real-time problem, because the JavaScript interface is called as soon as the DOM changes. But this very immediacy causes serious performance problems: every DOM change makes the rendering engine call into JavaScript, which carries a large overhead. For example, dynamically creating or modifying the content of 50 nodes triggers 50 callbacks, and each callback takes some time to execute. If each takes 4 milliseconds, the 50 callbacks take 200 milliseconds. If the browser is running an animation at that moment, the animation stutters because of the Mutation Event callbacks.

Precisely because Mutation Events cause page performance problems, their use was discouraged and they were gradually removed from the Web standards.

In order to solve the performance problem of Mutation Event caused by calling JavaScript synchronously, starting from DOM4, it is recommended to use MutationObserver instead of Mutation Event. The MutationObserver API can be used to monitor changes in DOM, including changes in attributes, changes in nodes, changes in content, and so on.

MutationObserver makes the response function asynchronous: instead of triggering a call on every DOM change, it triggers one asynchronous call after a batch of DOM changes, and uses a data structure to record all the DOM changes in that period. This way, even frequent DOM manipulation has little impact on performance.

Asynchronous calls and fewer triggers alleviate the performance problem, but how is the timeliness of notification preserved? If setTimeout were used to create a macrotask to trigger the callback, real-time performance would suffer greatly, because, as analyzed above, the rendering process may insert other events between two macrotasks, delaying the response.

This is where microtasks come into play. Every time a DOM node changes, the rendering engine wraps the change record in a microtask and adds it to the current microtask queue. When the checkpoint is reached, the V8 engine executes the microtasks in order.

To sum up, MutationObserver adopts an "asynchronous + microtask" strategy:

  • The asynchronous calls solve the performance problem of synchronous operation;
  • The microtasks solve the real-time problem.
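The "asynchronous + microtask" strategy can be observed directly in any JavaScript runtime. The following sketch uses the standard queueMicrotask and setTimeout APIs rather than MutationObserver itself (DOM changes need a browser): a microtask queued during the current macrotask runs at the checkpoint, before the next macrotask.

```javascript
const order = []

setTimeout(() => order.push('next macrotask'), 0) // a future macrotask

queueMicrotask(() => order.push('microtask'))     // queued during the current macrotask

order.push('current macrotask')                   // synchronous work finishes first

// After the current macrotask and its microtask checkpoint:
// order === ['current macrotask', 'microtask', 'next macrotask']
```

This is exactly the timeliness guarantee MutationObserver relies on: the change records are delivered before the rendering engine moves on to the next macrotask.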

Promise

Promise solves the problem of asynchronous coding style

image-20230408221107560

The main thread of the page initiates a time-consuming task and hands it to another process. The main thread then continues executing tasks in the message queue. When the other process finishes, it adds a task to the message queue of the rendering process, where it waits for the event loop to process it. The event loop eventually takes the task out of the message queue and triggers the related callback.

This is a major characteristic of page programming: asynchronous callbacks. The single-threaded architecture of the page dictates asynchronous callbacks, and asynchronous callbacks shape the way we write code.

Suppose there is a download requirement, implemented with XMLHttpRequest:

// 执行状态
function onResolve(response) {
    console.log(response)
}
function onReject(error) {
    console.log(error)
}

let xhr = new XMLHttpRequest()
xhr.ontimeout = function (e) { onReject(e) }
xhr.onerror = function (e) { onReject(e) }
xhr.onreadystatechange = function () { onResolve(xhr.response) }

// 设置请求类型,请求 URL,是否同步信息
let URL = 'https://time.geekbang.com'
xhr.open('GET', URL, true);

// 设置参数
xhr.timeout = 3000 // 设置 xhr 请求的超时时间
xhr.responseType = "text" // 设置响应返回的数据格式
xhr.setRequestHeader("X_TEST", "time.geekbang")

// 发出请求
xhr.send();

There are five callbacks in this short piece of code. So many callbacks make the logic of the code incoherent and non-linear, which runs against human intuition. This is how asynchronous callbacks affect the way we write code.

Since the focus is on the input (the request information) and the output (the response), the asynchronous process in the middle should not be reflected too much in the code, because it interferes with the core logic. The overall idea is shown in the figure below.

image-20230408221412687

Modify the code according to this idea:

//makeRequest 用来构造 request 对象
//把输入的 HTTP 请求信息全部保存到一个 request 的结构中,包括请求地址、请求头、请求方式、引用地址、同步请求还是异步请求、安全设置等信息
function makeRequest(request_url) {
    let request = {
        method: 'Get',
        url: request_url,
        headers: '',
        body: '',
        credentials: false,
        sync: true,
        responseType: 'text',
        referrer: ''
    }
    return request
}

//将所有的请求细节封装进 XFetch 函数
//[in] request,请求信息,请求头,延时值,返回类型等
//[out] resolve, 执行成功,回调该函数
//[out] reject  执行失败,回调该函数
function XFetch(request, resolve, reject) {
    let xhr = new XMLHttpRequest()
    xhr.ontimeout = function (e) { reject(e) }
    xhr.onerror = function (e) { reject(e) }
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200)
            resolve(xhr.response)
    }
    xhr.open(request.method, request.url, request.sync);
    xhr.timeout = request.timeout;
    xhr.responseType = request.responseType;
    // 补充其他请求信息
    //...
    xhr.send();
}
//这个 XFetch 函数需要一个 request 作为输入,然后还需要两个回调函数 resolve 和 reject,当请求成功时回调 resolve 函数,当请求出现问题时回调 reject 函数
//实现业务代码
XFetch(makeRequest('https://time.geekbang.org'),
    function resolve(data) {
        console.log(data)
    }, function reject(e) {
        console.log(e)
    })

The sample code above is more in line with linear thinking, and it works well in simple scenarios. However, in a slightly more complicated project, nesting too many callback functions quickly leads to callback hell:

//先请求time.geekbang.org/?category,如果请求成功的话,那么再请求time.geekbang.org/column,如果再次请求成功的话,就继续请求time.geekbang.org
XFetch(makeRequest('https://time.geekbang.org/?category'),
    function resolve(response) {
        console.log(response)
        XFetch(makeRequest('https://time.geekbang.org/column'),
            function resolve(response) {
                console.log(response)
                XFetch(makeRequest('https://time.geekbang.org'),
                    function resolve(response) {
                        console.log(response)
                    }, function reject(e) {
                        console.log(e)
                    })
            }, function reject(e) {
                console.log(e)
            })
    }, function reject(e) {
        console.log(e)
    })

The reason why this code looks messy comes down to two reasons:

  • The first is nested calls. Each task depends on the result of the previous one, and the new business logic is executed inside the previous task's callback, so as the nesting deepens, readability becomes very poor.
  • The second is the uncertainty of each task. Every task has two possible outcomes (success or failure), so the code has to check the result of every task and perform extra error handling for each one, which adds to the clutter.

Solution to the problem:

  • The first is to eliminate nested calls ;
  • The second is to combine error handling for multiple tasks .

Promise: eliminate nested calls and multiple error handling

//使用 Promise 来重构 XFetch 的代码
function XFetch(request) {
    function executor(resolve, reject) {
        let xhr = new XMLHttpRequest()
        xhr.open('GET', request.url, true)
        xhr.ontimeout = function (e) { reject(e) }
        xhr.onerror = function (e) { reject(e) }
        xhr.onreadystatechange = function () {
            if (this.readyState === 4) {
                if (this.status === 200) {
                    resolve(this.responseText, this)
                } else {
                    let error = {
                        code: this.status,
                        response: this.response
                    }
                    reject(error, this)
                }
            }
        }
        xhr.send()
    }
    return new Promise(executor)
}
//再利用 XFetch 来构造请求流程
var x1 = XFetch(makeRequest('https://time.geekbang.org/?category'))
var x2 = x1.then(value => {
    console.log(value)
    return XFetch(makeRequest('https://www.geekbang.org/column'))
})
var x3 = x2.then(value => {
    console.log(value)
    return XFetch(makeRequest('https://time.geekbang.org'))
})
x3.catch(error => {
    console.log(error)
})

Observe the above two pieces of code, focusing on the use of Promise.

  • First, Promise is introduced, and when XFetch is called, a Promise object will be returned.
  • When constructing a Promise object, an executor function needs to be passed in , and the main business processes of XFetch are executed in the executor function.
  • If the execution of the business running in the executor function is successful, the resolve function will be called; if the execution fails, the reject function will be called.
  • When the resolve function is called inside the executor function, the callback set by promise.then is triggered; when the reject function is called, the callback set by promise.catch is triggered.

Promise implements delayed binding of callback functions. This is reflected in the code by first creating the Promise object x1 and running the business logic in the executor passed to the Promise constructor; only after x1 is created is x1.then used to set the callback function.

// 创建 Promise 对象 x1,并在 executor 函数中执行业务逻辑
function executor(resolve, reject) {
    resolve(100)
}
let x1 = new Promise(executor)

//x1 延迟绑定回调函数 onResolve
function onResolve(value) {
    console.log(value)
}
x1.then(onResolve)

The second technique is passing the return value of the callback function onResolve through to the outermost layer. Because the type of Promise task to create next depends on the value returned from onResolve, the Promise object created there needs to be returned to the outermost layer, which is what lets us escape the nested calls.

image-20230408223200384

Promise eliminates nested calls through delayed binding of callback functions and return-value penetration:

function executor(resolve, reject) {
    let rand = Math.random();
    console.log(1)
    console.log(rand)
    if (rand > 0.5)
        resolve()
    else
        reject()
}
var p0 = new Promise(executor);

var p1 = p0.then((value) => {
    console.log("succeed-1")
    return new Promise(executor)
})

var p3 = p1.then((value) => {
    console.log("succeed-2")
    return new Promise(executor)
})

var p4 = p3.then((value) => {
    console.log("succeed-3")
    return new Promise(executor)
})

p4.catch((error) => {
    console.log("error")
})
console.log(2)

This code creates four Promise objects: p0, p1, p3, and p4. No matter which of them throws an exception, the final p4.catch can catch it. In this way, the errors of all the Promise objects are merged into one handler, solving the problem of each task having to handle its exceptions separately.

The reason the last object can catch all exceptions is that errors in a Promise chain have a "bubbling" nature: they are passed down the chain until they are handled by an onReject function or caught by a catch. With this "bubbling" feature, there is no need to catch exceptions separately in every Promise object.
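A minimal sketch of this "bubbling" (the step names here are made up for illustration): the rejection produced in the middle of the chain skips the remaining then callbacks and lands in the single catch at the end.

```javascript
const trace = []

Promise.resolve('step-1')
    .then(value => {
        trace.push(value)                       // 'step-1'
        return Promise.reject(new Error('step-2 failed'))
    })
    .then(value => {
        trace.push('never reached')             // skipped: the chain is already rejected
    })
    .catch(error => {
        trace.push('caught: ' + error.message)  // the single catch handles the error
    })

// When the chain settles, trace === ['step-1', 'caught: step-2 failed']
```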

Promises and microtasks

function executor(resolve, reject) {
    resolve(100)
}
let demo = new Promise(executor)

function onResolve(value) {
    console.log(value)
}
demo.then(onResolve)

When new Promise is executed, the Promise constructor runs first; since Promise is provided by the V8 engine, we cannot see inside the constructor for now.

Next, the Promise constructor calls the executor function passed to it. The executor then calls resolve, which is also implemented inside V8. Executing resolve triggers the callback onResolve set by demo.then, so we can infer that resolve internally calls the function set by demo.then.

Since Promise uses delayed binding, when the resolve function runs, the callback has not been bound yet, so its execution has to be postponed.

Let's simulate a Promise implementation with a constructor, a resolve method, and a then method:

//Bromise 实现了构造函数、resolve、then 方法
function Bromise(executor) {
    var onResolve_ = null
    var onReject_ = null
    // 模拟实现 resolve 和 then,暂不支持 reject
    this.then = function (onResolve, onReject) {
        onResolve_ = onResolve
    };
    function resolve(value) {
        // 让 resolve 中的 onResolve_ 函数延后执行
        //setTimeout(() => {
            onResolve_(value)
        //}, 0)
    }
    executor(resolve, null);
}


//使用 Bromise 来实现业务代码
function executor(resolve, reject) {
    resolve(100)
}
// 将 Promise 改成我们自己的 Bromise
let demo = new Bromise(executor)

function onResolve(value) {
    console.log(value)
}
demo.then(onResolve)
// 执行这段代码会报错,输出的内容是:
// Uncaught TypeError: onResolve_ is not a function
//     at resolve (<anonymous>:10:13)
//     at executor (<anonymous>:17:5)
//     at new Bromise (<anonymous>:13:5)
//     at <anonymous>:19:12
// 之所以出现这个错误,是由于 Bromise 的延迟绑定导致的:调用 onResolve_ 函数的时候,
// Bromise.then 还没有执行,所以会报 "onResolve_ is not a function" 的错误。
// 把 resolve 中被注释掉的 setTimeout 打开,让 onResolve_ 延后执行,即可解决这个问题。

The version above uses a timer to delay the execution of onResolve_, but timers are not very efficient, so Promise turns this deferral into a microtask: onResolve_ is still called with a delay, but the code executes sooner. This is why microtasks are used inside Promise.
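A sketch of that improvement, assuming the standard queueMicrotask API is available: replacing the timer with a microtask keeps the delayed binding working (then binds the callback before the microtask checkpoint fires it) without paying the minimum delay of setTimeout.

```javascript
const seen = []

function Bromise(executor) {
    var onResolve_ = null
    this.then = function (onResolve) {
        onResolve_ = onResolve
    }
    function resolve(value) {
        // 用微任务延后执行 onResolve_;到微任务检查点时,then 已完成回调绑定
        queueMicrotask(() => onResolve_(value))
    }
    executor(resolve, null)
}

let demo = new Bromise(resolve => resolve(100))
demo.then(value => seen.push(value))
// 此刻同步代码还在执行,回调尚未触发:seen 仍然是 []
// 当前宏任务结束后的微任务检查点才会把 100 推入 seen
```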

async/await

fetch('https://www.geekbang.org')
    .then((response) => {
        console.log(response)
        return fetch('https://www.geekbang.org/test')
    }).then((response) => {
        console.log(response)
    }).catch((error) => {
        console.log(error)
    })

ES7 introduced async/await, a major improvement to asynchronous programming in JavaScript. It provides the ability to access asynchronous resources with synchronous-style code without blocking the main thread, and makes the logic of the code much clearer:

async function foo() {
    try {
        let response1 = await fetch('https://www.geekbang.org')
        console.log('response1')
        console.log(response1)
        let response2 = await fetch('https://www.geekbang.org/test')
        console.log('response2')
        console.log(response2)
    } catch (err) {
        console.error(err)
    }
}
foo()

How does a generator work, and what is its underlying mechanism, the coroutine? Because async/await is built on the two technologies Generator and Promise, we can analyze through them how async/await writes asynchronous code in a synchronous style.

Generators vs Coroutines

A generator function is a function marked with an asterisk, whose execution can be paused and resumed.

function* genDemo() {
    console.log(" 开始执行第一段 ")
    yield 'generator 2'

    console.log(" 开始执行第二段 ")
    yield 'generator 2'

    console.log(" 开始执行第三段 ")
    yield 'generator 2'

    console.log(" 执行结束 ")
    return 'generator 2'
}
 
console.log('main 0')
let gen = genDemo()
console.log(gen.next().value)
console.log('main 1')
console.log(gen.next().value)
console.log('main 2')
console.log(gen.next().value)
console.log('main 3')
console.log(gen.next().value)
console.log('main 4')

Execute the code above and observe the output: the function genDemo is not executed in one go; the global code and the genDemo function run alternately. This is the defining characteristic of generator functions: they can pause and resume execution. The specific usage is:

  1. When code inside the generator function executes and encounters the **yield** keyword, the JavaScript engine returns the value after the keyword to the outside and pauses the function's execution.
  2. External code can resume the function's execution through the next method.
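next can also send a value back into the generator: the argument of next becomes the result of the paused yield expression. A small sketch (the names here are illustrative):

```javascript
function* conversation() {
    const answer = yield 'question'   // 暂停,并把 'question' 交给外部
    return answer * 2                 // 恢复执行时,answer 是 next 传入的值
}

const gen = conversation()
console.log(gen.next().value)    // 'question' —— 执行到第一个 yield
console.log(gen.next(21).value)  // 42         —— 21 成为 yield 表达式的值
```

This two-way channel is exactly what the executors below use to feed the result of an asynchronous operation back into the coroutine.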

Coroutines are more lightweight than threads. You can think of a coroutine as a task running on a thread; a thread can host multiple coroutines, but only one coroutine can run on the thread at a time. For coroutine B to run, coroutine A must hand over control of the main thread, which appears as A pausing and B resuming; likewise, coroutine A can be resumed from coroutine B. If a coroutine B is started from a coroutine A, we call A the parent coroutine of B.

Just as a process can have multiple threads, a thread can have multiple coroutines. Most importantly, coroutines are not managed by the operating system kernel but are controlled entirely by the program (that is, they run in user mode). The benefit is a big performance gain: switching coroutines does not consume resources the way thread switching does.

image-20230408231225174

It can be seen from the figure that there are four rules of coroutines:

  1. A coroutine gen is created by calling the generator function genDemo. The gen coroutine does not execute immediately after creation.
  2. To make the gen coroutine execute, gen.next must be called.
  3. While the coroutine is executing, the yield keyword can pause the gen coroutine and return the specified information to the parent coroutine.
  4. If the return keyword is encountered during execution, the JavaScript engine ends the current coroutine and returns the value after return to the parent coroutine.

The gen coroutine and the parent coroutine execute interactively on the main thread, not concurrently. Switching between them is accomplished through the cooperation of yield and gen.next.

When the yield method is called in the gen coroutine, the JavaScript engine will save the current call stack information of the gen coroutine and restore the call stack information of the parent coroutine. Similarly, when gen.next is executed in the parent coroutine, the JavaScript engine will save the call stack information of the parent coroutine and restore the call stack information of the gen coroutine.

image-20230408231516271

Use generators and Promises to transform the Promise code from the beginning:

//foo 函数
function* foo() {
    let response1 = yield fetch('https://www.geekbang.org')
    console.log('response1')
    console.log(response1)
    let response2 = yield fetch('https://www.geekbang.org/test')
    console.log('response2')
    console.log(response2)
}

// 执行 foo 函数的代码
let gen = foo()
function getGenPromise(gen) {
    return gen.next().value
}
getGenPromise(gen).then((response) => {
    console.log('response1')
    console.log(response)
    return getGenPromise(gen)
}).then((response) => {
    console.log('response2')
    console.log(response)
})

The foo function is a generator function that performs asynchronous operations in synchronous-looking code. Outside the foo function, however, we still need code to drive it, shown in the second half above. Let's analyze how that driver code works:

  • The first thing to do is let gen = foo()to create the gen coroutine.
  • Then pass the control of the main thread to the gen coroutine by executing gen.next in the parent coroutine.
  • After the gen coroutine gets control of the main thread, it calls the fetch function to create a Promise object response1, then suspends the execution of the gen coroutine through yield, and returns response1 to the parent coroutine.
  • After the parent coroutine resumes execution, call the response1.then method to wait for the request result.
  • After the request initiated by fetch completes, the callback in then is invoked. Once it has the result, the callback calls gen.next to give control of the main thread back to the gen coroutine, which continues with the next task.

Encapsulate the driver code into a function; we call such a function, which executes the generator code, an executor:

function* foo() {
    let response1 = yield fetch('https://www.geekbang.org')
    console.log('response1')
    console.log(response1)
    let response2 = yield fetch('https://www.geekbang.org/test')
    console.log('response2')
    console.log(response2)
}
co(foo());
co(foo());

By combining a generator with an executor, asynchronous code can be written in a synchronous style, which greatly improves readability.
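The co above comes from the co library; a minimal sketch of such an executor (simplified, without the real library's error handling) looks like the following. It keeps calling gen.next, waits for each yielded Promise, and feeds the result back into the coroutine:

```javascript
function co(gen) {
    return new Promise((resolve) => {
        function step(value) {
            const result = gen.next(value)        // 恢复协程,拿到下一个 yield 的值
            if (result.done) return resolve(result.value)
            Promise.resolve(result.value)         // 等待 yield 出来的 Promise
                .then(step)                       // 完成后把结果送回协程
        }
        step()
    })
}

// 用法示意:yield 一个 Promise,在协程里直接拿到它的结果
function* demo() {
    const a = yield Promise.resolve(1)
    const b = yield Promise.resolve(2)
    return a + b
}
co(demo()) // resolves with 3
```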

async

An async function executes asynchronously and implicitly returns a Promise as its result.

await

async function foo() {
    console.log(1)
    let a = await 100
    console.log(a)
    console.log(2)
}
console.log(0)
foo()
console.log(3)

image-20230408232543530

First, console.log(0) is executed, printing 0.

Next, the foo function is executed. Because foo is marked async, on entering the function the JavaScript engine saves the current call stack and other information, then executes the statements inside foo, and console.log(1) prints 1.

The next statement in foo is await 100. When executing await 100, the JavaScript engine silently does a lot of work behind the scenes. Let's take the statement apart to see what it does.

When await 100 is executed, a Promise object is created by default, roughly like this:

let promise_ = new Promise((resolve, reject) => {
    resolve(100)
})

In the process of creating this promise_ object, the resolve function is called in the executor, and the JavaScript engine submits this task to the microtask queue.

Then the JavaScript engine will suspend the execution of the current coroutine, transfer the control of the main thread to the parent coroutine for execution, and return the promise_ object to the parent coroutine

The control of the main thread has been handed over to the parent coroutine. At this time, one thing the parent coroutine has to do is to call promise_.then to monitor the change of the promise state.

Next, the parent coroutine continues: console.log(3) executes and prints 3. Then the parent coroutine's execution ends. Before it ends, it enters the microtask checkpoint and executes the microtask queue, where the task queued by resolve(100) triggers the callback set by promise_.then, as follows:

promise_.then((value) => {
    // 回调函数被激活后
    // 将主线程控制权交给 foo 协程,并将 value 值传给协程
})

After the callback function is activated, it will hand over the control of the main thread to the coroutine of the foo function, and at the same time pass the value value to the coroutine.

After the foo coroutine is activated, it will assign the previous value to the variable a, and then the foo coroutine will continue to execute subsequent statements. After the execution is complete, the control will be returned to the parent coroutine.
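The deferral described above can be observed directly: everything after an await runs as a microtask, after the synchronous code of the parent coroutine has finished. A sketch mirroring the example:

```javascript
const order = []

async function foo() {
    order.push(1)
    const a = await 100     // 把后续语句推迟到微任务中执行
    order.push(a)
    order.push(2)
}

order.push(0)
foo()                       // 同步执行到 await 100 后暂停,控制权交还父协程
order.push(3)

// 同步代码结束时 order 是 [0, 1, 3];微任务检查点之后变成 [0, 1, 3, 100, 2]
```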

Task Scheduling: With setTimeout, why use rAF

High-performance animation in JavaScript calls for the requestAnimationFrame API, rAF for short. So why is rAF recommended over setTimeout?

To explain this clearly, we have to start from the task scheduling system of the rendering process. Once you understand it, the difference between rAF and setTimeout follows naturally.

Head-of-line blocking problem of single message queue

The rendering main thread executes tasks in the message queue in first-in-first-out order: when a new task is produced, the rendering process appends it to the end of the queue; during execution, the rendering process takes tasks from the head of the queue and executes them in order.

image-20230409200439164

Under this single-message-queue architecture, if the user triggers a click or a page-zoom event, many less important tasks may already be queued ahead of it, such as V8 garbage collection or DOM timer tasks. If those tasks take too long, the user will feel the page lag.

image-20230409200459408

Under the single message queue architecture, low-priority tasks can block high-priority tasks.

How Chromium solves the head-of-line blocking problem

From 2013 to the present, the Chromium team has spent a lot of energy on continuously refactoring the underlying message mechanism

First iteration: introduce a high-priority queue

Ideally, we want to be able to fast-track high-priority tasks. For example, during the interaction phase, the following should be treated as high-priority tasks:

  • Click and page-scroll tasks triggered by the mouse;

  • Page-zoom tasks triggered by gestures;

  • Animation effects triggered by CSS, JavaScript, and similar operations.

After these tasks are triggered, the user expects immediate feedback on the page, so they need to execute before other tasks. To achieve this, we can add a high-priority message queue: high-priority tasks go into this queue, and tasks in it are executed first. The final effect is shown in the figure below:

image-20230409200640601

With a high-priority and a low-priority message queue, the rendering process adds tasks it considers urgent to the high-priority queue and the rest to the low-priority queue. We then introduce a task scheduler into the rendering process, responsible for picking the appropriate task from the multiple queues. The usual logic is: take tasks from the high-priority queue in order; if it is empty, take tasks from the low-priority queue in order.
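The scheduling logic just described can be sketched in a few lines (the class name, queue names, and task shape here are invented for illustration, not Chromium's real implementation):

```javascript
// 一个极简的任务调度器示意:先取高优先级队列,取空后再取低优先级队列
class TaskScheduler {
    constructor() {
        this.highPriorityQueue = []
        this.lowPriorityQueue = []
    }
    postTask(task, priority) {
        if (priority === 'high') this.highPriorityQueue.push(task)
        else this.lowPriorityQueue.push(task)
    }
    // 挑选下一个要执行的任务
    takeNextTask() {
        if (this.highPriorityQueue.length > 0) return this.highPriorityQueue.shift()
        return this.lowPriorityQueue.shift()
    }
}

const scheduler = new TaskScheduler()
scheduler.postTask('gc', 'low')          // 先进队的低优先级任务
scheduler.postTask('user-click', 'high') // 后进队的高优先级任务
console.log(scheduler.takeNextTask())    // 'user-click' —— 高优先级任务被提前执行
console.log(scheduler.takeNextTask())    // 'gc'
```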

We can go one step further and divide tasks into multiple different priorities to achieve more fine-grained task scheduling. For example, they can be divided into high priority, normal priority and low priority. The final effect is shown in the following figure:

image-20230409200917036

Now that multiple message queues are combined with a task scheduler, tasks can be scheduled flexibly and high-priority tasks can run earlier. This approach seems to solve the head-of-line blocking problem of the message queue.

However, most tasks need to keep their relative execution order. If user-input messages or compositing messages are spread across queues of different priorities, their relative order can be disrupted; for example, a frame might be composited before the input event that should produce it has been processed. So tasks of the same type need to preserve their relative execution order.

The second iteration: implement the message queue according to the message type

To solve this problem, we can create message queues with different priorities for different types of tasks, for example:

  • A message queue for input events, storing input events.

  • A message queue for compositing tasks, storing compositing events.

  • A default message queue, holding events such as resource loading and timer callbacks.

  • An idle message queue, storing low-urgency events such as V8's automatic garbage collection.

image-20230409201049261

This iterated strategy is quite practical, but it still has a problem: the priorities of these message queues are fixed, and the task scheduler schedules tasks according to this fixed, static priority.

The life cycle of a page is roughly divided into two phases: the loading phase and the interaction phase.

The static priority strategy above works well enough during the interaction phase, but during the page loading phase, if user input and compositing events are still executed first, the parsing of the page slows down.

Third Iteration: Dynamic Scheduling Strategy

image-20230409201449896

This picture shows how Chromium adjusts the priority of the message queue in different scenarios. Through this dynamic scheduling strategy, the core demands of different scenarios can be met, and this is also the task scheduling strategy currently adopted by Chromium.

Consider the page loading phase. At this stage, the user's highest priority is to see the page in the shortest possible time; interaction and compositing are not core demands. So the strategy is adjusted: during loading, tasks such as page parsing and JavaScript execution are moved to the queues with the highest priority, and the priority of the interaction and compositing queues is lowered.

After the page is loaded, it enters the interaction phase. Before looking at how Chromium adjusts task scheduling during interaction, let's review the page rendering process.

In the graphics card there is an area called the front buffer, which stores the image the monitor is about to display. The monitor reads this front buffer at a certain frequency and displays its contents. The frequency varies between monitors but is usually 60 Hz, meaning the monitor reads the front buffer every 1/60 second.

When the browser wants to update the displayed picture, it submits the newly generated frame to the back buffer of the graphics card. Once the submission is complete, the GPU swaps the back buffer and the front buffer: the front buffer becomes the back buffer and vice versa, which guarantees the monitor reads the latest frame from the GPU next time.

Notice that the monitor reading from the front buffer and the browser generating new frames into the back buffer are asynchronous processes, as shown in the following figure:

image-20230409201852597

The images read by the display and generated by the browser are not synchronized, which can easily cause many problems.

  • If the frame rate generated by the rendering process is slower than the refresh rate of the screen, the screen will display the same image in two frames. When this intermittent situation continues to occur, the user will clearly perceive that the animation is stuck.

  • If the frame rate generated by the rendering process is actually faster than the screen refresh rate, then some visual problems will also occur. For example, when the frame rate is 100fps and the refresh rate is only 60Hz, not all images rendered by the GPU are displayed. This will cause dropped frames.

  • Even if the screen's refresh rate matches the rate at which the GPU updates images, since they are two different systems, it is hard to synchronize the screen's display cycle with the browser's frame generation cycle.

Therefore, if frame generation is not synchronized with the display's VSync signal, dropped frames, stutter, and incoherence will result.

To solve these problems, the display's refresh cycle must be tied to the browser's frame generation cycle, and Chromium implements exactly that.

**When the display has finished drawing a frame, before it reads the next frame, it sends a vertical synchronization signal (VSync) to the GPU.** The browser makes full use of this VSync signal.

Specifically, when the GPU receives the VSync signal, it forwards it to the browser process, which in turn forwards it to the corresponding rendering process. After receiving the VSync signal, the rendering process can prepare to draw the next frame; see the figure below for the specific process:

image-20230409202056342

When the rendering process receives a user-interaction task, the next step is very likely drawing and compositing, so while executing user-interaction tasks we can raise the priority of compositing tasks to the highest.

Next, after the DOM has been processed and layout and painting are done, the results are submitted to the compositing thread to synthesize the final image, and the compositing thread enters its working state. In this scenario the compositing thread is already busy, so we can lower the priority of the next compositing task to the lowest and raise the priority of tasks such as page parsing and timers.
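The two paragraphs above describe priorities that change with the compositor's state. A minimal sketch of that idea follows; the queue names and the `compositorBusy` flag are illustrative assumptions, not Chromium's actual data structures:

```javascript
// Sketch of dynamic queue priority: the scheduler picks the first
// non-empty queue in priority order, and that order changes depending
// on whether the compositing thread is currently busy.
const queues = { input: [], compositing: [], parsing: [], timers: [] };
let compositorBusy = false;

function nextQueue() {
  // While the compositor is working, demote compositing and favor
  // parsing and timer tasks; otherwise compositing comes right after input.
  const order = compositorBusy
    ? ['input', 'parsing', 'timers', 'compositing']
    : ['input', 'compositing', 'parsing', 'timers'];
  return order.find((name) => queues[name].length > 0);
}
```

With a compositing task and a parsing task both pending, `nextQueue()` picks compositing while the compositor is idle, and parsing once `compositorBusy` is set.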

After compositing is completed, the compositing thread submits a compositing-finished message to the rendering main thread. If the compositing happens to be very fast, say only 8 milliseconds from the user's input to the finished composite, then, since the VSync period is 16.66 ms (1/60 s), there is no need to generate another frame within this VSync cycle. The stretch from the end of compositing to the next VSync therefore becomes an idle period, during which less urgent work can run, such as V8 garbage collection or callback tasks registered via window.requestIdleCallback().
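The idle-period pattern above can be sketched as follows. In a real page, `window.requestIdleCallback` supplies a genuine `IdleDeadline` tied to the gap before the next VSync; the Node fallback here is an assumed shim that pretends each idle period fits about three units of work:

```javascript
// Sketch: deferring non-urgent work into the gap between the end of
// compositing and the next VSync. The shim below is for illustration only.
const ric = (typeof requestIdleCallback === 'function')
  ? requestIdleCallback
  : (cb) => {                      // shim: each "idle period" fits ~3 units
      let budget = 3;
      cb({ didTimeout: false, timeRemaining: () => budget-- });
    };

const pending = ['a', 'b', 'c', 'd', 'e']; // non-urgent work units
const done = [];

function onIdle(deadline) {
  // Run small units of work only while the idle period has time left.
  while (deadline.timeRemaining() > 0 && pending.length > 0) {
    done.push(pending.shift());
  }
  if (pending.length > 0) ric(onIdle); // leftovers wait for the next idle gap
}
ric(onIdle);
```

The key design point is that each unit of work must be small: `timeRemaining()` is checked between units, so a single oversized unit could still overrun into the next frame.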

Fourth iteration: task starvation

The scheme above seems perfect, but one problem remains: in certain states, new high-priority tasks keep being added to the queue, so lower-priority tasks never get a chance to execute. This is called task starvation.

To solve task starvation, Chromium assigns an execution weight to each queue: after a certain number of consecutive high-priority tasks, one low-priority task is executed in between, which alleviates the starvation problem.
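The weighting idea can be sketched in a few lines. This is an assumed simplification of the mechanism, not Chromium's actual scheduler; `weight` plays the role of the per-queue execution weight, capping how many high-priority tasks may run consecutively:

```javascript
// Sketch of anti-starvation scheduling: after `weight` consecutive
// high-priority tasks, force one low-priority task to run.
function drain(highQ, lowQ, weight = 3) {
  const order = [];
  let highRun = 0; // consecutive high-priority tasks executed so far
  while (highQ.length > 0 || lowQ.length > 0) {
    const takeLow =
      highQ.length === 0 || (lowQ.length > 0 && highRun >= weight);
    if (takeLow) {
      order.push(lowQ.shift());
      highRun = 0;               // reset the consecutive-high counter
    } else {
      order.push(highQ.shift());
      highRun++;
    }
  }
  return order;
}

// Five high-priority and two low-priority tasks, weight 3:
// a low-priority task is interleaved after every three high-priority ones.
drain(['h1', 'h2', 'h3', 'h4', 'h5'], ['l1', 'l2']);
```

Even with a constant stream of high-priority work, the low-priority queue is guaranteed to make progress at a bounded rate.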

Origin blog.csdn.net/weixin_46488959/article/details/130609979