The page module in the browser

Chrome Developer Tools

Chrome Developer Tools (DevTools) contains many important panels: the Network, Performance, and Memory panels are related to performance analysis, while the Elements, Sources, and Console panels are related to page debugging.

image-20230409131128402

image-20230409131223970

network panel

The Network panel consists of six areas: Controller, Filter, Snapshot Information, Timeline, Detailed List, and Download Information Summary.

image-20230409131330917

controller

image-20230409131519100

  • The button with the red dot starts or pauses packet capture.
  • The global search button is very important: it searches the content of all downloaded resources, letting you quickly locate a particular file or string.
  • Disable cache prevents resources from being loaded from the cache. This is very useful when debugging web applications, because a warm cache skews the results of network performance tests.
  • The throttling dropdown (labeled Online / No throttling) simulates weak networks such as 2G/3G. It limits bandwidth so you can see how the page behaves under poor connectivity and then adjust your loading strategy to make the web application work better on these weak networks.

filter

Its job is filtering. When a page produces too many entries in the detailed list and you only want to view, say, JavaScript or CSS files, the filter module lets you restrict the list to the file types you care about.

Snapshot information

The snapshot information area can be used to analyze what the user sees while waiting for the page to load, and to analyze the actual experience of the user. For example, if the screenshot is still blank after the page loads for more than 1 second, then you need to analyze whether it is a network or code problem. (Tick "Capture screenshots" on the panel to enable screenshots.)

timeline

The timeline mainly displays the relationship between time and the loading status of HTTP, HTTPS, and WebSocket requests, giving an intuitive picture of how the page loads. If several vertical lines are stacked together, those resources were loaded at the same time. For the loading details of each individual file, use the detailed list discussed below.

detailed list

It records in detail the status of each resource from the initiation of the request to the completion of the request, as well as the data information of the final request completion. With this list, you can easily diagnose some network problems.

properties of the list

image-20230409132010773

By default, the list is sorted by request time, with the resource with the earliest request at the top. Of course, it can also be sorted by basic attributes such as return status code, request type, request duration, content size, etc., just click on the corresponding attribute.

details

image-20230409132108314

Here you can view the request line and request headers of any item in the request list, as well as the response line, response headers, and response body.

Timeline of a single resource

image-20230409132213390

After initiating an HTTP request, the browser first searches the cache. If the cache is not hit, it initiates a DNS request to obtain the IP address, uses that IP address to establish a TCP connection with the server, sends the HTTP request, and waits for the server to respond. If the response header contains redirection information, the entire process is repeated for the new URL. This is the basic flow of an HTTP request in the browser.

image-20230409132301054

The first phase is Queuing . When the browser initiates a request, there are several reasons why it may not be executed immediately and has to wait in a queue:

  • First of all, the resources in a page have priorities. Core files such as CSS, HTML, and JavaScript get the highest priority, while non-core resources such as images, video, and audio get a lower one. When the latter compete with the former, they usually "give way" and enter the waiting queue.
  • Secondly, the browser will maintain up to 6 TCP connections for each domain name. If all 6 TCP connections are busy when an HTTP request is initiated, the request will be queued.
  • Finally, while the network process is allocating disk space for data, new HTTP requests also need to wait briefly for the disk allocation to complete.

After queuing completes, the request moves on to initiating a connection. But even before the connection is initiated, some factors can still delay the process; this delay is shown as Stalled in the timeline, meaning stagnation.

What needs to be added here: if a proxy server is used, a Proxy Negotiation stage is added, showing the time spent negotiating the connection with the proxy server. It is not reflected in the figure above because no proxy server was used.

Next comes the Initial connection/SSL stage , in which the connection to the server is established. It includes the time to set up the TCP connection; if the HTTPS protocol is used, an additional SSL handshake is required, which is mainly used to negotiate encryption parameters.

After establishing a connection with the server, the network process prepares the request data and sends it onto the network. This is the Request sent phase . It is usually very fast, because the browser only has to hand the data in its buffer over to the network stack; it does not wait to find out whether the server received it. This time is usually less than 1 millisecond.

After the data is sent out, the next step is to wait to receive the first byte of data from the server. This stage is called Waiting (TTFB), usually also called " time to first byte ". TTFB is an important indicator reflecting the response speed of the server. For the server, the shorter the TTFB time, the faster the server responds.

After the first byte arrives, the remaining response data is received piece by piece. This is the Content Download stage , the time from the first byte until all response data has been received.
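The phases above map onto the browser's PerformanceResourceTiming entries. A minimal sketch that derives each phase from a timing entry; the sample timestamps below are made up for illustration:

```javascript
// Compute Network-panel-style phases from a PerformanceResourceTiming-like
// object. All timestamps are in milliseconds, as in the real API.
function timelinePhases(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart,
    initialConnection: t.connectEnd - t.connectStart,
    // secureConnectionStart is 0 for plain HTTP, so guard against that
    ssl: t.secureConnectionStart > 0 ? t.connectEnd - t.secureConnectionStart : 0,
    ttfb: t.responseStart - t.requestStart,          // Waiting (TTFB)
    contentDownload: t.responseEnd - t.responseStart,
  };
}

// In a real page you would pass performance.getEntriesByType('resource')[i].
const entry = {
  domainLookupStart: 10, domainLookupEnd: 30,
  connectStart: 30, secureConnectionStart: 45, connectEnd: 80,
  requestStart: 80, responseStart: 230, responseEnd: 260,
};
console.log(timelinePhases(entry));
// → dns: 20, initialConnection: 50, ssl: 35, ttfb: 150, contentDownload: 30
```

This is a simplified model: DevTools also reports Queuing and Stalled, which are not broken out as cleanly in the Resource Timing API.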

Optimize time-consuming items on the timeline
Queuing time is too long

Long queuing time is most often caused by the browser's limit of 6 connections per domain name. You can therefore spread a site's resources across multiple domain names, for example three domains, so that 18 connections can be open at the same time. This solution is called domain name sharding . Beyond sharding, it is recommended to upgrade the site to HTTP/2 , because HTTP/2 multiplexes requests over a single connection and no longer has the limit of at most 6 TCP connections per domain name.
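As an illustration of domain name sharding, here is a small helper (all host names are hypothetical) that deterministically spreads asset paths across several shard domains:

```javascript
// Deterministically spread asset paths across several shard hosts so the
// browser can open up to 6 TCP connections per host (18 across 3 hosts).
// Hashing the path keeps each asset on one fixed shard, so its HTTP cache
// entry stays valid across page loads.
function shardUrl(path, shards) {
  let hash = 0;
  for (const ch of path) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return 'https://' + shards[hash % shards.length] + path;
}

const shards = ['s1.example.com', 's2.example.com', 's3.example.com'];
console.log(shardUrl('/img/logo.png', shards)); // always the same shard for this path
```

Note that with HTTP/2 this technique is counterproductive: splitting hosts prevents the browser from multiplexing everything over one connection.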

Time to First Byte (TTFB) too long
  • The server took too long to generate the page data . For a dynamic page, when the server receives a request, it reads the required data from the database, feeds it into a template, renders the template, and returns the result to the user; any of these steps may be slow.
    • Increase the server's processing speed, for example by adding caching at various levels
  • Network problems . For example, a low-bandwidth server, or cross-ISP routing (say, a server hosted on China Telecom being accessed by China Unicom users) can also slow down responses.
    • Use a CDN to cache static files close to users
  • Extra user information sent in the request headers . For example, unnecessary cookies: the server may have to process each one after receiving them, which increases its processing time.
    • When sending requests, trim unnecessary cookie data as much as possible
Content Download takes too long

If the Content Download of a single request takes a lot of time, it may be caused by too many bytes. At this time, you need to reduce the file size, such as compression, removing unnecessary comments in the source code, etc.

Download information summary

In the download information summary, you should focus on the DOMContentLoaded and Load events, and the completion time of these two events.

  • DOMContentLoaded fires once the page's DOM has been fully built, which means the HTML, JavaScript, and CSS files needed to construct the DOM have all been downloaded.
  • Load, indicating that the browser has loaded all resources (images, style sheets, etc.).

In the download information summary panel, you can see how long it took for these two events to fire.
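These two timestamps are also exposed to scripts via the Navigation Timing API. A minimal sketch that reads them from a PerformanceNavigationTiming-like entry (the sample values below are made up):

```javascript
// Read the two events off a PerformanceNavigationTiming-like entry, plus
// the gap between them: time spent loading resources (images, stylesheets
// not needed for DOM construction, etc.) after the DOM was ready.
function loadSummary(nav) {
  const domContentLoaded = nav.domContentLoadedEventStart;
  const load = nav.loadEventStart;
  return { domContentLoaded, load, resourceGap: load - domContentLoaded };
}

// In a real page: loadSummary(performance.getEntriesByType('navigation')[0])
const sample = { domContentLoadedEventStart: 480, loadEventStart: 1250 };
console.log(loadSummary(sample));
// → domContentLoaded: 480, load: 1250, resourceGap: 770
```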

DOM tree

The byte stream of the HTML file transmitted from the network to the rendering engine cannot be directly understood by the rendering engine, so it must be converted into an internal structure that the rendering engine can understand. This structure is DOM. DOM provides a structured representation of HTML documents. In the rendering engine, DOM has three levels of functions.

  • From the perspective of the page, the DOM is the basic data structure that generates the page.
  • From the perspective of JavaScript scripting, DOM provides an interface for JavaScript scripting operations. Through this set of interfaces, JavaScript can access the DOM structure, thereby changing the structure, style and content of the document.
  • From a security perspective, DOM is a security line, and some unsafe content is rejected during the DOM parsing phase.

How the DOM tree is generated

The HTML parser does not wait for the entire document to load before parsing; instead, it parses incrementally, consuming data as fast as the network process delivers it.

After the network process receives the response header, it judges the file type from the content-type field. For example, if the value is "text/html", the browser treats the response as an HTML file and selects or creates a renderer process for the request. Once the rendering process is ready, a pipeline for sharing data is established between the network process and the rendering process . As the network process receives data, it puts it into this pipeline; the rendering process continuously reads from the other end and "feeds" the data to the HTML parser as it arrives.

image-20230409133152357

Converting byte stream to DOM requires three stages.

In the first stage, the byte stream is converted into Token through a tokenizer.

The first step when V8 compiles JavaScript is lexical analysis, which breaks the JavaScript down into tokens. Parsing HTML works the same way: a tokenizer first converts the byte stream into tokens, which come in two kinds, Tag Tokens and Text Tokens.

image-20230409133338739

Tag Tokens are divided into StartTag and EndTag; for example, `<body>` is a StartTag and `</body>` is an EndTag, corresponding to the blue and red blocks in the figure, while Text Tokens correspond to the green blocks.

The second and third stages run concurrently: tokens are parsed into DOM nodes, and the DOM nodes are added to the DOM tree.

The HTML parser maintains a Token stack , used mainly to work out the parent-child relationships between nodes; the tokens produced in the first stage are pushed onto this stack in order. The processing rules are as follows:

  • If a StartTag Token is pushed onto the stack , the HTML parser creates a DOM node for it and adds the node to the DOM tree; its parent is the node generated by the adjacent token below it on the stack.
  • If the tokenizer produces a Text Token , a text node is generated and added directly to the DOM tree. Text tokens are not pushed onto the stack; the parent is the DOM node corresponding to the token currently on top of the stack.
  • If the tokenizer produces an EndTag token , say EndTag div, the HTML parser checks whether the element on top of the Token stack is StartTag div; if so, it pops StartTag div off the stack, indicating that parsing of the div element is complete.

The new Token generated by the tokenizer is continuously pushed and popped from the stack, and the whole parsing process continues until the tokenizer completes the word segmentation of all byte streams.

<html>
<body>
    <div>1</div>
    <div>test</div>
</body>
</html>

This code reaches the HTML parser as a byte stream. When the HTML parser starts working, it first creates an empty DOM structure rooted at document and pushes a StartTag document token onto the bottom of the stack. After tokenization, the first token parsed out is StartTag html; it is pushed onto the stack, and at the same time an html DOM node is created and added under document, as shown in the following figures:

image-20230409133808263

image-20230409133908191

Next, the Text Token of the first div is parsed. The rendering engine creates a text node for it and adds the node to the DOM tree; its parent is the node corresponding to the element currently on top of the Token stack.

image-20230409133949056

Next, the tokenizer parses out the first EndTag div. The HTML parser checks whether the element on top of the stack is StartTag div; since it is, StartTag div is popped from the top of the stack, as shown in the following figures:

image-20230409134016430

image-20230409134029909
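The push/pop walkthrough above can be condensed into a toy parser. This is a deliberately minimal sketch: it assumes the input is already tokenized and ignores all of HTML's error-recovery rules:

```javascript
// Build a DOM-like tree from a token list using a stack, following the
// three rules above: StartTag pushes and creates a node, Text attaches to
// the stack top without pushing, EndTag pops its matching StartTag.
function buildTree(tokens) {
  const root = { tag: 'document', children: [] }; // StartTag document at stack bottom
  const stack = [root];
  for (const tok of tokens) {
    const top = stack[stack.length - 1];
    if (tok.type === 'StartTag') {
      const node = { tag: tok.name, children: [] };
      top.children.push(node); // parent is the node for the stack top
      stack.push(node);
    } else if (tok.type === 'Text') {
      top.children.push({ text: tok.value }); // text nodes are not pushed
    } else if (tok.type === 'EndTag') {
      if (top.tag === tok.name) stack.pop(); // element fully parsed
    }
  }
  return root;
}

// Token stream for the <html><body><div>1</div><div>test</div> example:
const tokens = [
  { type: 'StartTag', name: 'html' },
  { type: 'StartTag', name: 'body' },
  { type: 'StartTag', name: 'div' }, { type: 'Text', value: '1' }, { type: 'EndTag', name: 'div' },
  { type: 'StartTag', name: 'div' }, { type: 'Text', value: 'test' }, { type: 'EndTag', name: 'div' },
  { type: 'EndTag', name: 'body' },
  { type: 'EndTag', name: 'html' },
];
console.log(JSON.stringify(buildTree(tokens), null, 1));
```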

How JavaScript Affects DOM Generation

JavaScript blocks DOM generation, and style files block the execution of JavaScript. In real projects you therefore need to pay close attention to JavaScript files and stylesheet files; used improperly, they will hurt page performance.

<html>
<body>
    <div>1</div>
    <script>
    let div1 = document.getElementsByTagName('div')[0]
    div1.innerText = 'time.geekbang'
    </script>
    <div>test</div>
</body>
</html>

A JavaScript script has been inserted between the two divs, and its parsing changes things slightly. Everything before the `<script>` tag is parsed as before, but when the parser reaches the `<script>` tag, the rendering engine recognizes a script, and the HTML parser suspends DOM parsing, because the upcoming JavaScript may modify the DOM that has been generated so far.

image-20230409134401595

At this point the HTML parser suspends its work and the JavaScript engine steps in to execute the script inside the script tag. Because this script modifies the content of the first div in the DOM, after it runs the div's text has been changed to time.geekbang. Once the script finishes, the HTML parser resumes and continues parsing the remaining content until the final DOM is generated.

Usually a page also needs to reference external JavaScript files, and then the parsing process gets a little more complicated:

//foo.js
let div1 = document.getElementsByTagName('div')[0]
div1.innerText = 'time.geekbang'
<html>
<body>
    <div>1</div>
    <script type="text/javascript" src='foo.js'></script>
    <div>test</div>
</body>
</html>

When the script tag is reached, DOM parsing pauses and the JavaScript must run; but here the JavaScript code first has to be downloaded. Pay attention to this download step: downloading a JavaScript file blocks DOM parsing , and downloads are usually time-consuming, affected by factors such as the network environment and the size of the JavaScript file.

However, the Chrome browser applies many optimizations here, one of the main ones being pre-parsing . When the rendering engine receives the byte stream, it starts a pre-parser thread that scans the HTML for JavaScript, CSS, and other external files and starts downloading them ahead of time.

Back to DOM parsing: introducing JavaScript blocks the DOM, but there are strategies to mitigate this, such as serving JavaScript files through a CDN and compressing them to reduce their size. In addition, if a script contains no DOM-related code, it can be loaded asynchronously by marking it with async or defer. The usage is as follows:

<!-- A script with the async attribute executes as soon as it has finished loading -->
<script async type="text/javascript" src='foo.js'></script>

<!-- A script with the defer attribute executes after parsing finishes, before the DOMContentLoaded event fires -->
<script defer type="text/javascript" src='foo.js'></script>

//theme.css
div {
    color: blue
}

<html>
    <head>
        <link href='theme.css' rel='stylesheet'>
    </head>
<body>
    <div>1</div>
    <script>
            let div1 = document.getElementsByTagName('div')[0]
            div1.innerText = 'time.geekbang' // needs the DOM
            div1.style.color = 'red'  // needs the CSSOM
        </script>
    <div>test</div>
</body>
</html>

In this example, the statement div1.style.color = 'red' appears in the JavaScript code; it manipulates the CSSOM, so before the JavaScript can run, all CSS styles above the script must be parsed. Therefore, if the code references an external CSS file, executing the JavaScript must wait until that CSS file has been downloaded and parsed into a CSSOM object.

Before parsing the JavaScript, the engine cannot know whether the script touches the CSSOM. So whenever the rendering engine encounters a script, whether or not it manipulates the CSSOM, it first downloads and parses the CSS, and only then executes the JavaScript.

So the JavaScript script depends on the stylesheet, which adds another blocking step to the process.

CSS in the rendering pipeline

//theme.css
div {
    color: coral;
    background-color: black
}
<html>
<head>
    <link href="theme.css" rel="stylesheet">
</head>
<body>
    <div>geekbang com</div>
</body>
</html>

image-20230409135542387

The first is to initiate a request for the main page. The requester may be the rendering process or the browser process. The initiated request is sent to the network process for execution. After the network process receives the returned HTML data, it sends it to the rendering process, and the rendering process parses the HTML data and builds the DOM. Special attention needs to be paid here. There is a period of idle time between requesting HTML data and building DOM. This idle time may become a bottleneck for page rendering.

When the rendering process receives the byte stream of the HTML file, it first starts a pre-parser thread ; if it encounters JavaScript or CSS files, the pre-parser downloads them in advance. For the code above, it discovers the external theme.css file and initiates its download. Note another stretch of idle time here: after the DOM has been built but before theme.css has finished downloading, the rendering pipeline has nothing to do, because the next step, building the layout tree, requires both the CSSOM and the DOM. So it must wait for the CSS to load and be parsed into the CSSOM.

The rendering engine cannot directly understand the content of a CSS file, so it must be parsed into a structure the rendering engine can understand: the CSSOM. Like the DOM, the CSSOM has two functions: first, it gives JavaScript the ability to manipulate stylesheets; second, it provides the basic style information for building the layout tree . The CSSOM is exposed to scripts as document.styleSheets.

After the DOM and CSSOM are built, the rendering engine constructs the layout tree. Its structure is basically a copy of the DOM tree, except that elements that will not be displayed are filtered out, such as elements with the display:none attribute and the head and script tags. After copying the basic structure, the rendering engine selects the matching style information for each DOM element; this process is style calculation . It then computes the geometric position of every element in the layout tree; this process is layout . Style calculation and layout together complete the construction of the layout tree, after which the subsequent painting operations can begin.

//theme.css
div {
    color: coral;
    background-color: black
}
<html>
<head>
    <link href="theme.css" rel="stylesheet">
</head>
<body>
    <div>geekbang com</div>
    <script>
        console.log('time.geekbang.org')
    </script>
    <div>geekbang com</div>
</body>
</html>

A simple JavaScript snippet has been added inside the body tag. With JavaScript present, the rendering pipeline changes somewhat; see the following rendering pipeline diagram:

image-20230409140357519

In the process of parsing DOM, if you encounter a JavaScript script, you need to suspend DOM parsing to execute JavaScript, because JavaScript may modify the current state of DOM.

However, before executing the JavaScript, if the page references external CSS files or contains CSS inside a style tag, the rendering engine must first convert that content into a CSSOM, because JavaScript is able to modify the CSSOM and executing the JavaScript therefore depends on it. In other words, CSS can also block DOM generation in some cases.

Let's take a look at a more complicated situation. If the JavaScript external reference file is included in the body, the Demo code is as follows:

//theme.css
div {
    color: coral;
    background-color: black
}

//foo.js
console.log('time.geekbang.org')

<html>
<head>
    <link href="theme.css" rel="stylesheet">
</head>
<body>
    <div>geekbang com</div>
    <script src='foo.js'></script>
    <div>geekbang com</div>
</body>
</html>

image-20230409140624898

During pre-parsing of the received HTML data, the HTML pre-parser recognizes that there are a CSS file and a JavaScript file to download, and initiates both download requests at the same time. Note that the two downloads overlap, so the total download time is governed by the slower of the two files.

The rest of the pipeline is as before. No matter which file arrives first, the browser must wait until the CSS file has downloaded and the CSSOM has been generated, then execute the JavaScript, and only then continue building the DOM, building the layout tree, and painting the page.

Factors Affecting Page Display and Optimization Strategies

The rendering pipeline affects the speed of the first page display, and the speed of the first page display directly affects the user experience

Let's look at the three stages a page passes through, visually, from the time a URL request is issued until the page's content is displayed for the first time.

  • In the first stage, after the request is sent, the page enters the data submission stage; the content of the previous page is still displayed.
    • The factors affecting this stage are mainly the network and server processing time.
  • In the second stage, after the data is submitted, the rendering process creates a blank page. This period is usually called the parsing white screen : the browser waits for CSS and JavaScript files to load, generates the CSSOM and DOM, synthesizes the layout tree, and performs a series of further steps to prepare the first render.
    • The main problem at this stage is white-screen time; if it is too long, the user experience suffers. To shorten it, analyze the main tasks of this stage: parsing HTML, downloading CSS, downloading JavaScript, generating the CSSOM, executing JavaScript, generating the layout tree, and painting the page.
    • The bottleneck is usually downloading CSS files, downloading JavaScript files, and executing JavaScript .
    • So to shorten the white-screen duration, you can apply the following strategies:
      • Inline critical JavaScript and CSS, removing those two kinds of downloads entirely, so rendering can start as soon as the HTML file arrives.
      • Where inlining is unsuitable, reduce file sizes as much as possible, for example by removing unnecessary comments and compressing JavaScript files with tools such as webpack.
      • Mark JavaScript that is not needed during the HTML parsing stage with async or defer.
      • Split a large CSS file by media query attributes into several purpose-specific files, so that a given CSS file is loaded only in the scenario that needs it.
  • In the third stage, after the first render completes, the full-page generation stage begins, and the page is painted bit by bit.
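The media-query splitting mentioned above can look like this (file names hypothetical). Only the stylesheet whose media condition currently matches blocks rendering; the others are downloaded at low priority without blocking the first paint:

```html
<!-- Blocks first render everywhere -->
<link rel="stylesheet" href="base.css">
<!-- Only blocks rendering when printing, or on wide screens respectively -->
<link rel="stylesheet" href="print.css" media="print">
<link rel="stylesheet" href="wide.css" media="(min-width: 1200px)">
```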

Layering and Compositing Mechanisms

How does the monitor display images

Each display has a fixed refresh rate, usually 60 Hz, meaning it updates 60 times per second. The images it shows come from the graphics card's front buffer . The display's task is simple: read the image in the front buffer 60 times per second and show it on screen.

What does the graphics card do

The graphics card's job is to composite new images and store them in the back buffer . Once it has written a composited image to the back buffer, the system swaps the back and front buffers, ensuring that the display always reads the most recently composited image. Normally the graphics card's update frequency matches the display's refresh rate, but sometimes, in complex scenes, the card slows down, causing visible stutter.

Frames VS Frame Rate

Each picture generated by the rendering pipeline is called a frame, and the number of frames the pipeline produces per second is called the frame rate. For example, if 60 frames are produced per second while scrolling, the frame rate is 60 FPS.

How to generate a frame of image

Any frame can be generated in one of three ways: rearrangement (reflow), redrawing (repaint), or compositing .

The rendering paths of these three methods are different. Generally, the longer the rendering path, the more time it takes to generate the image .

Rearrangement recalculates the layout tree from the CSSOM and DOM, so generating a new image re-executes every stage of the rendering pipeline. With a complex layout, rendering efficiency is hard to guarantee.

Redrawing is slightly more efficient because the re-layout stage is skipped, but it still has to recompute the paint information and trigger every operation after the paint stage.

Compared with rearrangement and redrawing, the compositing operation path is very short, and there is no need to trigger the two stages of layout and drawing. If the GPU is used, the compositing efficiency will be very high.

The compositing techniques in Chrome can be summed up in three words: layering, tiling , and compositing .

layering and compositing

Pages are usually complex, and some need elaborate animation effects: a menu animating open on click, the page scrolling under the mouse wheel, or fancy 3D effects. If the target image were generated directly from the layout tree without a layering mechanism, then every small change in the page would trigger rearrangement or redrawing, seriously hurting rendering efficiency.

In order to improve the rendering efficiency of each frame, Chrome introduces a layering and compositing mechanism.

Think of a web page as being superimposed with many pictures, each picture corresponds to a layer, and the Chrome compositor finally synthesizes these layers into the picture used to display the page. In this process, the operation of decomposing the material into multiple layers is called layering , and the operation of merging these layers together is called compositing . So, layering and compositing are usually used together.

Suppose a page is divided into two layers, and for the next frame the upper layer needs some transformation: translation, rotation, scaling, a shadow, or an alpha fade. The compositor then only needs to apply the corresponding transform to that layer. The graphics card handles such operations with ease, so the compositing step takes very little time.

How Chrome implements the layering and compositing mechanism

In Chrome's rendering pipeline, layering is reflected after the layout tree is generated , and the rendering engine will convert it into a layer tree (Layer Tree) according to the characteristics of the layout tree. The layer tree is the basic structure of the subsequent process of the rendering pipeline.

Each node in the layer tree corresponds to a layer, and the subsequent paint stage operates on the nodes of the layer tree. The paint stage does not actually produce pixels; it assembles drawing instructions into a list. For example, to paint a layer with a black background and a circle in the middle, the paint stage produces a display list like [Paint BackgroundColor: Black, Paint Circle]; once that list exists, painting is complete.

After the display list exists, the rasterization stage turns its instructions into bitmaps, one per layer. Once the compositor thread has these bitmaps, it composites them into a single image and sends the result to the back buffer. That is the rough layering-and-compositing process.

The compositing operation is performed on the compositor thread, which means it is not affected by the main thread . This is why the main thread can be completely stuck while CSS animations keep running.
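This is why animating compositor-friendly properties such as transform and opacity stays smooth, while animating layout properties like left re-enters the pipeline on the main thread every frame. A small CSS illustration (class names hypothetical):

```css
/* Triggers layout + paint every frame: runs through the main thread */
.slide-bad  { animation: move-bad 1s linear infinite; position: relative; }
@keyframes move-bad  { to { left: 200px; } }

/* Compositor-only: the layer is transformed during compositing */
.slide-good { animation: move-good 1s linear infinite; }
@keyframes move-good { to { transform: translateX(200px); } }
```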

Tiling

Layering improves rendering efficiency at the macro level, while tiling improves it at the micro level.

Normally the page content is much larger than the screen. If compositing waited for every layer to be fully rasterized, it would do unnecessary work and the time to produce the composited image would grow.

Therefore, the compositor thread divides each layer into fixed-size tiles and rasterizes the tiles near the viewport first, which greatly speeds up how quickly the page appears. But sometimes even drawing only the highest-priority tiles takes a long time, because of one critical factor: texture upload. Copying bitmaps from main memory into GPU memory is relatively slow.
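The prioritization step can be sketched like this (a toy model, not Chrome's actual tile scheduler): tiles are sorted by their distance to the viewport so the visible ones rasterize first.

```javascript
// Toy tile prioritizer: rasterize tiles closest to the viewport first.
function prioritizeTiles(tiles, viewport) {
  // Distance from a tile's vertical range to the viewport's vertical range
  // (zero when the tile overlaps the viewport).
  const distance = t =>
    Math.max(0, viewport.top - (t.y + t.height), t.y - viewport.bottom);
  return [...tiles].sort((a, b) => distance(a) - distance(b));
}

const tiles = [
  { id: 'A', y: 0, height: 256 },
  { id: 'B', y: 512, height: 256 },
  { id: 'C', y: 1024, height: 256 },
];
const ordered = prioritizeTiles(tiles, { top: 400, bottom: 800 });
console.log(ordered.map(t => t.id)); // B overlaps the viewport, so it comes first
```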

To mitigate this, Chrome uses another strategy: when compositing a tile for the first time, use a low-resolution bitmap. Halving the resolution, for example, shrinks the texture to a quarter of its original size. On first display the page shows the low-resolution content while the compositor continues rasterizing at normal resolution; once the normal-resolution content is ready, it replaces the low-resolution version. Users briefly see blurry content, but that beats seeing nothing at all.

Optimize code with layering techniques

When writing a web application, you often need to apply geometric transforms, opacity changes, or scaling to an element. Implementing these effects in JavaScript drives the entire rendering pipeline on every frame, so such animations are very inefficient.

In this case, you can use will-change to tell the rendering engine in advance that the element will be transformed. The CSS looks like this:

.box {
    will-change: transform, opacity;
}

This tells the rendering engine ahead of time that the box element will undergo geometric and opacity transforms. The engine then promotes the element to its own layer, and when the transforms happen they are handled directly by the compositor thread without involving the main thread, which greatly improves rendering efficiency. This is why CSS animations are more efficient than JavaScript animations.

So, whenever an effect or animation can be handled on the compositor thread, use will-change to tell the rendering engine in advance to prepare a separate layer for the element. But everything has two sides: every separate layer costs memory, because from the layer tree onward each pipeline stage carries an extra layer structure. Use will-change sparingly.

  • The compositor thread can only handle work that does not change a layer's content. Changes to text, layout, or color all change the content, so they cannot be composited away; they trigger a reflow or a repaint instead.

  • What the compositor thread can handle directly are whole-layer geometric transforms, opacity changes, shadows, and so on, none of which touch the layer's content.

  • Scrolling is a good example: the page content itself does not change, the layers are simply moved up or down, so scrolling can be completed entirely on the compositor thread.

Make pages display and respond faster

Usually a page goes through three stages: the loading stage, the interaction stage, and the closing stage.

  • The loading stage runs from sending the request to rendering the complete page. The main factors affecting it are the network and JavaScript.
  • The interaction stage runs from the completion of page loading through the user's interaction with the page. The main factor affecting it is JavaScript.
  • The closing stage covers the cleanup the page performs after the user closes it.

loading stage

A typical rendering pipeline

image-20230409153203146

Not all resources block the first render of a page: images, audio, and video do not, while the HTML, the initially requested JavaScript files, and the CSS files do, because HTML and JavaScript are needed to build the DOM, and CSS is needed to build the render tree.

Resources that can block the first render are called critical resources. Based on them, three core factors affecting first render can be identified.

The first is the number of critical resources. The more critical resources there are, the longer the first render takes. For example, there are 3 critical resources in the figure above: 1 HTML file, 1 JavaScript file, and 1 CSS file.

The second is the critical resource size. In general, the smaller the total content of the critical resources, the less time it takes to download them, and therefore the less time rendering is blocked. In the figure the critical resources are 6 KB, 8 KB, and 9 KB, for a total of 23 KB.

The third is how many RTTs (Round Trip Times) the critical resources require. When TCP transmits a file, say 0.1 MB, the data is not sent in one shot; due to TCP's characteristics it is split into packets and transmitted over multiple round trips. RTT, the round-trip delay, is an important network performance metric: the total time from the sender sending data to the sender receiving the receiver's acknowledgment. An HTTP data packet carries roughly 14 KB, so a 0.1 MB page is split into about 8 packets; in other words, it needs about 8 RTTs.

Now count the RTTs for the critical resources in the figure. The HTML is 6 KB, less than 14 KB, so 1 RTT covers it. For the JavaScript and CSS files, note that the rendering engine has a pre-parser thread: as soon as HTML data arrives, it scans for critical resources and immediately issues their requests, so the JavaScript and CSS requests can be considered simultaneous and their round trips overlap. When counting RTTs, only the larger of the two matters; here that is the CSS file at 9 KB, which is also under 14 KB, so JavaScript and CSS together count as 1 RTT. In total, the critical resource requests in the figure cost 2 RTTs.
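The packet arithmetic above can be sketched as a small helper (assuming, as the text does, roughly 14 KB of payload per packet, with overlapping pre-parsed requests costing only as much as the largest one):

```javascript
// Rough RTT estimate: one round trip per 14 KB packet of a resource,
// and overlapping (pre-parsed) requests cost only as much as the largest.
const PACKET_KB = 14;

function rttForResource(sizeKB) {
  return Math.ceil(sizeKB / PACKET_KB);
}

function rttForOverlappingBatch(sizesKB) {
  return rttForResource(Math.max(...sizesKB));
}

// The example from the text: 6 KB HTML first, then JS (8 KB) and CSS (9 KB)
// requested in parallel by the pre-parser thread.
const total = rttForResource(6) + rttForOverlappingBatch([8, 9]);
console.log(total); // 2
```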

The general optimization principle is to reduce the number of key resources, reduce the size of key resources, and reduce the RTT times of key resources .

  • How to reduce the number of critical resources? One way is to inline the JavaScript and CSS; if both files in the figure above were inlined, the number of critical resources would drop from 3 to 1. Another way: if a script does not touch the DOM or CSSOM, give it the async or defer attribute; likewise, if a stylesheet does not need to load before first render, you can mark it (for example with a media attribute on the link tag) so it no longer blocks display. Scripts with async or defer, and stylesheets marked as non-blocking, are no longer critical resources.
  • How to reduce the size of critical resources? Compress the CSS and JavaScript, strip comments from the HTML, CSS, and JavaScript files, or demote resources from critical to non-critical as described above.
  • How to reduce the number of critical-resource RTTs? Mainly by combining the two techniques above: fewer and smaller critical resources mean fewer round trips. In addition, a CDN can shorten the duration of each RTT.

interactive stage

Optimizing the interaction stage is really about how fast the renderer can produce each frame, because in this stage the frame rate determines how smooth the interaction feels.

The rendering pipeline of the interaction stage no longer loads critical resources or builds the DOM and CSSOM; interaction animations are usually triggered by JavaScript.

image-20230409154056512

In most cases, a new frame is triggered by JavaScript modifying the DOM or CSSOM; some frames are triggered by CSS alone.

If the style-calculation stage detects a change to layout-related properties, a reflow is triggered, which in turn runs the rest of the rendering pipeline. This is very expensive.

Similarly, if style calculation finds no layout change but only, say, a color change, then no layout work is needed: the pipeline skips the layout stage and goes straight to painting. This is called a repaint. A repaint skips layout, but it is still not cheap.

There is a third case: effects such as transforms, fades, and animations implemented purely in CSS are triggered by CSS and executed on the compositor thread. This is called compositing, and it is the most efficient path, because it triggers neither reflow nor repaint, and the compositing operation itself is very fast.
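The three cases can be summarized in a toy classifier (the property lists are illustrative examples, not an exhaustive or authoritative mapping of CSS properties to pipeline stages):

```javascript
// Which pipeline stage does changing a given CSS property re-enter?
// (Illustrative property lists only; real engines decide case by case.)
const LAYOUT_PROPS = new Set(['width', 'height', 'margin', 'left', 'top', 'font-size']);
const PAINT_PROPS = new Set(['color', 'background-color', 'visibility', 'box-shadow']);
const COMPOSITE_PROPS = new Set(['transform', 'opacity']);

function pipelineStage(prop) {
  if (LAYOUT_PROPS.has(prop)) return 'reflow';       // layout → paint → composite
  if (PAINT_PROPS.has(prop)) return 'repaint';       // paint → composite
  if (COMPOSITE_PROPS.has(prop)) return 'composite'; // compositor thread only
  return 'unknown';
}

console.log(pipelineStage('transform')); // composite
```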

A big principle is to make the generation of individual frames faster .

  • Reduce JavaScript script execution time

Sometimes a single JavaScript function runs for hundreds of milliseconds, seriously crowding out the main thread's other rendering work. Two strategies help in this situation:

One is to break a function that runs in one long burst into multiple smaller tasks, so that no single run takes too long.
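A minimal sketch of that idea (the helper name is made up): process a fixed-size chunk, then yield the main thread with setTimeout before continuing. In a browser you might yield with requestIdleCallback or requestAnimationFrame instead.

```javascript
// Split one long task into chunks so no single run blocks the main thread
// for long; setTimeout(…, 0) yields between chunks so rendering tasks run.
function runChunked(items, worker, chunkSize, onDone) {
  let i = 0;
  function processChunk() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) worker(items[i]);
    if (i < items.length) {
      setTimeout(processChunk, 0); // yield the main thread, continue later
    } else {
      onDone();
    }
  }
  processChunk();
}

// Instead of squaring 10,000 numbers in one go, do 1,000 per chunk.
const results = [];
runChunked(
  Array.from({ length: 10000 }, (_, n) => n),
  n => results.push(n * n),
  1000,
  () => console.log('done, processed', results.length)
);
```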

The other is to use Web Workers. A Web Worker can be regarded as a thread outside the main thread: JavaScript can run there, but there is no DOM or CSSOM environment, so worker code cannot access the DOM. Time-consuming work that does not touch the DOM can therefore be moved into a Web Worker.

In the interaction phase, the general principle for JavaScript scripts is not to occupy the main thread for too long at a time.

  • Avoid Forced Sync Layout

First, the normal case. After you add or delete elements through the DOM API, styles and layout must be recalculated, but normally that recalculation happens asynchronously in a separate task, precisely so the current task does not hold the main thread too long. Consider this page:

<html>
<body>
    <div id="mian_div">
        <li id="time_li">time</li>
        <li>geekbang</li>
    </div>

    <p id="demo">Forced layout demo</p>
    <button onclick="foo()">Add new element</button>

    <script>
        function foo() {
            let main_div = document.getElementById("mian_div")
            let new_node = document.createElement("li")
            let textnode = document.createTextNode("time.geekbang")
            new_node.appendChild(textnode);
            document.getElementById("mian_div").appendChild(new_node);
        }
    </script>
</body>
</html>

For the above code, use the Performance tool to record the process of adding elements, as shown in the following figure:

image-20230409154711937

Adding the element via JavaScript runs in one task, and recalculating style and layout runs in a separate, later task; that is the normal layout behavior.

Forced synchronous layout, by contrast, means JavaScript forces style calculation and layout to run inside the current task:

function foo() {
    let main_div = document.getElementById("mian_div")
    let new_node = document.createElement("li")
    let textnode = document.createTextNode("time.geekbang")
    new_node.appendChild(textnode);
    document.getElementById("mian_div").appendChild(new_node);
    // Reading offsetHeight here forces the engine to run layout
    // immediately, otherwise the value would be stale.
    console.log(main_div.offsetHeight)
}

After appending the new element, the code calls main_div.offsetHeight to get the new height of main_div. An up-to-date height requires a fresh layout, so before returning the value, JavaScript forces the rendering engine to run a layout operation immediately. This is called forced synchronous layout.

Similarly, you can see the task status recorded by Performance below:

image-20230409154951708

Both style calculation and layout are triggered during execution of the current script; that is forced synchronous layout.

To avoid forcing a synchronous layout, you can adjust your strategy to query the relevant values ​​before modifying the DOM. The code looks like this:

function foo() {
    let main_div = document.getElementById("mian_div")
    // To avoid forced synchronous layout, query layout-dependent
    // values before modifying the DOM.
    console.log(main_div.offsetHeight)
    let new_node = document.createElement("li")
    let textnode = document.createTextNode("time.geekbang")
    new_node.appendChild(textnode);
    document.getElementById("mian_div").appendChild(new_node);
}
  • Avoid layout thrashing

Worse than forced synchronous layout is layout thrashing: forcing layout over and over within a single JavaScript execution, as in this loop:

function foo() {
    let time_li = document.getElementById("time_li")
    for (let i = 0; i < 100; i++) {
        let main_div = document.getElementById("mian_div")
        let new_node = document.createElement("li")
        let textnode = document.createTextNode("time.geekbang")
        new_node.appendChild(textnode);
        // Reading time_li.offsetHeight right after a DOM mutation
        // forces a fresh layout on every iteration.
        new_node.offsetHeight = time_li.offsetHeight;
        document.getElementById("mian_div").appendChild(new_node);
    }
}

The loop reads time_li.offsetHeight on every iteration, and each read forces style and layout to be recalculated. Recording the execution with Performance shows the following:

image-20230409155139553

Style calculation and layout run repeatedly inside foo, which badly hurts its execution efficiency. The fix is the same as for forced synchronous layout: avoid querying layout-dependent values while you are modifying the DOM structure; here, for example, read the height once before the loop.

  • Reasonable use of CSS to synthesize animation

Composited animations run directly on the compositor thread, unlike layout and paint operations, which run on the main thread. Even if the main thread is busy with JavaScript or layout work, a CSS animation keeps running. So make good use of CSS composited animations: if CSS can drive the animation, let it.

In addition, if you can know in advance to perform an animation operation on an element, it is best to mark it as will-change, which tells the rendering engine that the element needs to be generated as a separate layer.

  • Avoid frequent garbage collections

JavaScript uses automatic garbage collection. If a function frequently creates temporary objects, the garbage collector runs frequently too; each collection occupies the main thread, delaying other tasks, and in severe cases causes visible dropped frames and jank.

So avoid generating temporary garbage: optimize your storage structures and avoid creating many small, short-lived objects.
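One common way to avoid churning small objects is a simple object pool (a generic sketch, not tied to any particular library): released objects are reused instead of allocating new ones, so the garbage collector has less to do.

```javascript
// Minimal object pool: reuse released objects instead of allocating a
// fresh temporary object (and future garbage) on every call.
class Pool {
  constructor(factory) {
    this.factory = factory;
    this.free = [];
    this.created = 0; // how many objects were actually allocated
  }
  acquire() {
    if (this.free.length > 0) return this.free.pop();
    this.created++;
    return this.factory();
  }
  release(obj) {
    this.free.push(obj);
  }
}

const pool = new Pool(() => ({ x: 0, y: 0 }));
const p1 = pool.acquire();
pool.release(p1);
const p2 = pool.acquire(); // reuses p1, no new allocation
console.log(pool.created); // 1
```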

Virtual DOM

Flaws of the DOM

Manipulating the DOM from JavaScript affects the whole rendering pipeline. The DOM also provides a set of JavaScript interfaces for traversing and modifying nodes, including methods such as getElementById, removeChild, and appendChild.

Calling document.body.appendChild(node) to add an element to the body triggers a chain reaction: the rendering engine appends the node, then runs style calculation, layout, paint, rasterization, and compositing. This cascade is a reflow; beyond reflows, a DOM operation may also trigger repaints or compositing, so one small change can ripple through everything. On top of that, careless DOM code can cause forced synchronous layout and layout thrashing, which drastically reduce rendering efficiency. So DOM manipulation calls for great care.

For complex pages, and especially for today's large single-page applications, the DOM structure is intricate and must be modified constantly. Every DOM operation may force the rendering engine into a reflow, repaint, or recomposite, and because the structure is complex, each of those operations is expensive. That is a real performance problem.

So we need a way to reduce JavaScript's direct operations on the DOM. This is where the virtual DOM comes in.

What is virtual DOM

What problems does the virtual DOM set out to solve?

  • Apply the changed content of the page to the virtual DOM instead of directly to the DOM.
  • When changes are applied to the virtual DOM, the virtual DOM is not in a hurry to render the page, but only adjusts the internal state of the virtual DOM, so that the cost of operating the virtual DOM becomes very light.
  • When enough changes have been collected by the virtual DOM, these changes are applied to the real DOM at once.

image-20230409160201990

This figure shows the virtual DOM execution flow as used in React. Let's use it to walk through how the virtual DOM works.

  • Creation phase. First, a virtual DOM is built from the JSX and the initial data; it mirrors the real DOM tree structure. A real DOM tree is then created from the virtual DOM tree, and once the real tree exists, the rendering pipeline runs and the page appears on screen.
  • Update phase. When the data changes, a new virtual DOM tree is built from the new data. React then compares the new tree with the old one, finds the differences, and applies them to the real DOM tree in one batch; finally the rendering engine re-runs the pipeline and produces the new page.
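A drastically simplified diff over flat node lists (nothing like React's real reconciler; keys, child recursion, and component types are all omitted) illustrates the "compare, then batch-apply" idea:

```javascript
// Toy virtual DOM diff: compare old and new node lists by position and
// collect a patch list to apply to the real DOM in one batch.
function diff(oldNodes, newNodes) {
  const patches = [];
  const len = Math.max(oldNodes.length, newNodes.length);
  for (let i = 0; i < len; i++) {
    const a = oldNodes[i], b = newNodes[i];
    if (a === undefined) patches.push({ type: 'insert', index: i, node: b });
    else if (b === undefined) patches.push({ type: 'remove', index: i });
    else if (a.tag !== b.tag || a.text !== b.text)
      patches.push({ type: 'replace', index: i, node: b });
  }
  return patches;
}

const oldTree = [{ tag: 'li', text: 'time' }, { tag: 'li', text: 'geekbang' }];
const newTree = [{ tag: 'li', text: 'time' }, { tag: 'li', text: 'geek' },
                 { tag: 'li', text: 'new' }];
console.log(diff(oldTree, newTree).map(p => p.type)); // [ 'replace', 'insert' ]
```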

Now for React's latest update mechanism, Fiber. As the figure shows, when data updates, React generates a new virtual DOM, compares it with the previous one, finds the changed nodes, and applies those changes to the DOM.

Focus on the comparison step. Originally, the two virtual DOM trees were compared inside a recursive function whose core algorithm is called reconciliation. Usually the comparison is fast, but when the virtual DOM is complex, it can occupy the main thread for a long time, making other tasks wait and the page stutter. To solve this, the React team rewrote reconciliation: the new algorithm is called the Fiber reconciler, and the old one the Stack reconciler.

Fiber is another name for coroutine, which is the association being made here: the Fiber reconciler yields the main thread periodically while the algorithm runs, solving the problem of the Stack reconciler occupying it for too long.

double buffering

In game development and other graphics work, the screen displays whatever is in the front buffer. But many graphics operations are complex and computationally heavy; a complete image may take many passes to compute. If each partial result were written straight into the displayed buffer, the user would watch a complicated image appear piece by piece, and the interface would seem to flicker as the page refreshes.

With double buffering, intermediate results go into a second buffer first; only after all computation finishes and that buffer holds the complete image is its content copied to the display buffer in one step. This keeps the displayed image stable.
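The idea in miniature (a generic sketch, not any particular graphics API): all drawing goes into a back buffer, and the displayed buffer only ever changes in one atomic swap.

```javascript
// Minimal double buffer: drawing touches only the back buffer; the front
// buffer (what the "screen" shows) changes only in one atomic swap.
class DoubleBuffer {
  constructor(size) {
    this.front = new Array(size).fill(0); // currently displayed
    this.back = new Array(size).fill(0);  // being drawn into
  }
  draw(index, value) {
    this.back[index] = value; // partial results never reach the screen
  }
  swap() {
    [this.front, this.back] = [this.back, this.front];
  }
}

const fb = new DoubleBuffer(4);
fb.draw(0, 1);
fb.draw(1, 1);
console.log(fb.front); // still all zeros — no flicker mid-draw
fb.swap();
console.log(fb.front); // now shows the completed drawing
```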

The virtual DOM can be seen as a buffer in front of the DOM: like the graphics case, it applies the result to the real DOM only after a complete batch of operations, which avoids unnecessary intermediate updates and keeps the DOM output stable.

MVC pattern

image-20230409160907844

The overall MVC structure is simple: a model, a view, and a controller. Its core idea is to separate data from the view: the view and the model never communicate directly; all communication goes through the controller. The usual path is that the view changes, the controller is notified, and the controller decides whether the model's data needs updating. Many other patterns derive from MVC by varying the communication paths and the controller's implementation, such as MVP and MVVM, but the basic skeleton is always MVC.

So when analyzing a front-end framework built on React or Vue, look first for the MVC skeleton, then at how the communication paths and the controller are concretely implemented; that lets you understand the framework at the architecture level. For example, in a React project the React part can be treated as the view in MVC, and combining it with Redux yields an MVC-like structure, as shown below:

image-20230409161041277

In this diagram, you can think of the virtual DOM as the view portion of MVC, whose controllers and models are provided by Redux. Its specific implementation process is as follows:

  • The controller in the figure is used to monitor the changes of DOM. Once the DOM changes, the controller will notify the model to update the data;
  • After the model data is updated, the controller will notify the view that the model data has changed;
  • After the view receives the update message, it will generate a new virtual DOM according to the data provided by the model;
  • After the new virtual DOM is generated, it needs to be compared with the previous virtual DOM to find out the changed nodes;
  • After comparing the changed nodes, React applies the changed virtual nodes to the DOM, which triggers the update of the DOM nodes;
  • The change of the DOM node will trigger a series of subsequent changes in the rendering pipeline, so as to realize the update of the page.
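The notify-and-update loop in the bullets above can be sketched as a minimal store with subscribers (loosely inspired by Redux, but much simpler than Redux's actual implementation):

```javascript
// Minimal model/controller sketch: the store holds the model; subscribers
// (the view) are notified after every state change and can re-render.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe: fn => listeners.push(fn),
    dispatch: action => {
      state = reducer(state, action);     // update the model
      listeners.forEach(fn => fn(state)); // notify the view
    },
  };
}

const store = createStore(
  (state, action) => (action.type === 'inc' ? { count: state.count + 1 } : state),
  { count: 0 }
);
store.subscribe(s => console.log('render with count =', s.count));
store.dispatch({ type: 'inc' }); // view re-renders with count = 1
```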

Progressive Web Apps (PWAs)

Three major evolutionary routes of browsers:

  • The first is turning applications into web applications;
  • The second is bringing web applications to mobile;
  • The third is the Web as an operating system.

PWA stands for Progressive Web App. Taken literally, it is "progressive" plus "web app". The "web app" part is easy: it starts as an ordinary web page. The "progressive" part should be understood from two angles.

  • From the developer's perspective, PWA offers a gradual transition path that lets an ordinary site evolve step by step into a web application. Going gradually keeps the cost of adoption low: the site can pick up new capabilities one at a time instead of all at once.
  • From a technical perspective, PWA itself evolves gradually: better device-capability support, smoother animation, faster page loads, and more native-app features are added bit by bit.

These two points show that PWA takes a moderate, incremental strategy. It is no longer a radical attempt to outright replace native apps and mini-programs; instead it plays to the Web's strengths while steadily closing the gap with them.

PWA can thus be defined as a set of ideas for progressively enhancing the Web's advantages and, through technical means, progressively closing the gap with native applications and mini-programs. Technologies built on this idea can be classified as PWA.

Web Apps vs Native Apps

So what are web pages missing compared to native apps?

  • First, web applications lack offline capability; offline or on a weak network they are basically unusable. Users expect an immersive experience, and working smoothly offline or on a weak network is a baseline requirement for an application.
  • Second, web applications lack message push: as an app vendor, you need the ability to push messages to your users.
  • Third, web applications lack a first-class entry point: the ability to install the app on the desktop and launch it directly from there, instead of opening a browser every time.

For the above web defects, PWA proposes two solutions: try to solve the problem of offline storage and message push by introducing Service Worker, and solve the problem of first-level entry by introducing manifest.json .

What is Service Worker

The core idea of the Service Worker is to insert an interceptor between the page and the network, so that requests can be intercepted and responses cached. The overall structure is shown below:

image-20230409161827066

Before a Service Worker is installed, the web app requests resources directly through the network module. Once installed, every resource request from the web app passes through the Service Worker first, which decides whether to return a response from its cache or to go to the network again; all of that control sits in the Service Worker.
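That decision can be modeled in plain JavaScript (a pure-JS stand-in: a real Service Worker would do this inside a 'fetch' event handler using the caches and fetch globals; the names here are illustrative):

```javascript
// Pure-JS model of a Service Worker's cache-first decision: answer from
// the cache when possible, otherwise fetch from the network and cache it.
function makeFetchHandler(cache, network) {
  return async function handle(url) {
    if (cache.has(url)) return cache.get(url); // serve from the cache
    const response = await network(url);       // fall back to the network
    cache.set(url, response);                  // remember it for next time
    return response;
  };
}

// Usage: the second request for the same URL never touches the network.
const handler = makeFetchHandler(new Map(), async url => 'body of ' + url);
handler('/app.js').then(body => console.log(body)); // body of /app.js
```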

Service Worker Design Ideas

architecture

JavaScript tasks and the page's rendering pipeline both execute on the page's main thread. If a script runs too long, it blocks the main thread, stretches the frame time, and the user feels jank, which is a very bad experience.

To keep JavaScript from monopolizing the page's main thread, browsers introduced Web Workers. A Web Worker lets JavaScript run outside the main thread. Since a worker has no access to the current page's DOM environment, it can only run DOM-independent scripts, returning results to the main thread via the postMessage method. In Chrome, a Web Worker is in fact a new thread inside the renderer process, and its life cycle is tied to the page.

"Run it outside the main thread" is the core idea Service Worker inherits from Web Worker. But a Web Worker is ephemeral: it exits when its script finishes, and it cannot persist results, so the same work must be redone next time. Service Worker therefore adds storage on top of the Web Worker idea.

In addition, since a Service Worker must serve multiple pages, it cannot be bound to any single page. In the current Chrome architecture, the Service Worker runs in the browser process; because the browser process has the longest life cycle, the Service Worker can serve every page for the browser's entire lifetime.

message push

Message push is also built on Service Worker, because when a push arrives the page may not even be open; the Service Worker receives the server's push message and surfaces it to the user in an appropriate form.

Security

On security, the core issue is HTTP: it transmits data in plain text, exposing it to eavesdropping, tampering, and hijacking; shipping data over plain HTTP is effectively "streaking". Service Worker was therefore designed from the start to require HTTPS: HTTPS traffic is encrypted, so intercepted data cannot be read, and HTTPS includes an integrity verification mechanism, so both parties can easily tell whether data has been tampered with.

WebComponent component development

CSS's global scope hinders componentization, and so does the DOM: there is only one DOM per page, readable and writable from anywhere. Componentizing pure JavaScript is no problem, but once JavaScript meets CSS and the DOM, things get difficult.

WebComponent offers a solution: it can encapsulate a local view, letting DOM, CSSOM, and JavaScript operate in a local scope, so local CSS and DOM do not affect the rest of the page.

WebComponent is a combination of technologies, specifically Custom elements, Shadow DOM, and HTML templates; for details, see the related pages on MDN.

<!DOCTYPE html>
<html>

<body>
    <!--
            1: define the template
            2: define the internal CSS styles
            3: define the JavaScript behavior
    -->
    <template id="geekbang-t">
        <style>
            p {
                background-color: brown;
                color: cornsilk
            }

            div {
                width: 200px;
                background-color: bisque;
                border: 3px solid chocolate;
                border-radius: 10px;
            }
        </style>
        <div>
            <p>time.geekbang.org</p>
            <p>time1.geekbang.org</p>
        </div>
        <script>
            function foo() {
                console.log('inner log')
            }
        </script>
    </template>
    <script>
        class GeekBang extends HTMLElement {
            constructor() {
                super()
                // get the component template
                const content = document.querySelector('#geekbang-t').content
                // create the shadow DOM node
                const shadowDOM = this.attachShadow({ mode: 'open' })
                // attach the template content to the shadow DOM
                shadowDOM.appendChild(content.cloneNode(true))
            }
        }
        customElements.define('geek-bang', GeekBang)
    </script>

    <geek-bang></geek-bang>
    <div>
        <p>time.geekbang.org</p>
        <p>time1.geekbang.org</p>
    </div>
    <geek-bang></geek-bang>
</body>

</html>

Using WebComponent usually involves the following three steps.

First, use the template element to create a template. The content of the template can be located through the DOM, but the template element itself is not rendered on the page; that is, the template node in the DOM tree does not appear in the layout tree. So you can use template to define basic element structures that can be reused. After the template is defined, we also need to define the style information inside it.

Second, a GeekBang class needs to be created. Its constructor does three things:

  1. Find the template content;
  2. Create a shadow DOM;
  3. Add the template content to the shadow DOM.

The hardest part to understand above is the shadow DOM. Its function is to isolate the content of the template from the global DOM and CSS, so that elements and styles can be kept private. The shadow DOM can be regarded as a scope: its internal styles and elements do not affect global ones, and accessing the shadow DOM's internal styles or elements from the global environment must go through an agreed interface.

Through the shadow DOM, encapsulation of CSS and elements is achieved. After creating the class that encapsulates the shadow DOM, you can use customElements.define to register it as a custom element.

This element can then be used like a normal HTML element, as <geek-bang></geek-bang> in the code above.

The page finally rendered by the above code is shown in the following figure:

image-20230409163115318

Styles inside the shadow DOM do not affect the global CSSOM. In addition, the internal elements of the shadow DOM cannot be queried directly through the DOM interface. For example, if you use document.getElementsByTagName('div') to find all div elements, you will find that the elements inside the shadow DOM are not returned, because querying them requires a dedicated interface; in this way the shadow DOM's internal elements are isolated from the external DOM.
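This can be observed in the browser console with the example page above (a sketch, assuming that page is loaded; it relies on the shadow root having been attached with `mode: 'open'`, which is what exposes the `shadowRoot` property as the dedicated query interface):

```javascript
// Global DOM queries do not see inside the shadow DOM: only the <p>
// elements of the ordinary <div> are found, not those inside <geek-bang>.
document.getElementsByTagName('p');

// Because the shadow root was attached with { mode: 'open' }, the host
// element exposes it via .shadowRoot, which can then be queried.
const host = document.querySelector('geek-bang');
host.shadowRoot.querySelectorAll('p'); // the paragraphs inside the shadow DOM
```

Had the shadow root been attached with `mode: 'closed'`, `host.shadowRoot` would return null and even this interface would be unavailable from outside.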

CSS and DOM can be isolated through the shadow DOM, but note that JavaScript in the shadow DOM is not isolated: for example, a JavaScript function defined inside the shadow DOM can still be accessed from outside. This is because the JavaScript language itself already has good means of componentization, such as modules and closures.

The role of the shadow DOM mainly comes down to the following two points:

  1. Elements in the shadow DOM are invisible to the entire web page;
  2. The CSS of the shadow DOM will not affect the CSSOM of the entire webpage, and the CSS inside the shadow DOM only works on the internal elements.

How Browsers Implement Shadow DOM

image-20230409163357721

This figure shows the DOM structure corresponding to the sample code above. As you can see, because the geek-bang element is used twice, two shadow DOMs are generated, and each shadow DOM has a shadow root node; the styles or elements to be displayed are added to this root node. You can regard each shadow DOM as an independent DOM with its own styles and attributes: internal styles do not affect external styles, and external styles do not affect internal styles.

To implement these characteristics of the shadow DOM, the browser makes many conditional judgments internally. For example, when searching for elements through the DOM interface, the rendering engine checks whether the shadow-root element under geek-bang is a shadow DOM; if it is, the query skips that shadow-root element directly. This is why the internal elements of the shadow DOM cannot be queried directly through the DOM API.

In addition, when generating the layout tree, the rendering engine also checks whether the shadow-root element under geek-bang is a shadow DOM. If it is, then when selecting CSS styles for the elements inside the shadow DOM, it directly uses the CSS defined inside the shadow DOM, so the final rendered result shows the styles defined there.


Origin blog.csdn.net/weixin_46488959/article/details/130609956