WEB / H5 performance optimization summary

Today let's talk about front-end graphics rendering optimization. Since I may start studying WebGL soon, I want to first summarize the H5 optimization work I have done. This is published on GERRY_BLOG by TiMiGerry; please keep the link when reprinting.

Static resources: images

1. Image formats
JPEG: JPEG compression first converts the image's RGBA color values into a different color space, then downsamples to separate high-frequency from low-frequency color changes and applies a DCT (discrete cosine transform). The high-frequency components are compressed more heavily, followed by quantization and encoding, finally producing the compressed JPEG. The compressed image is not identical to the original data: some information is lost during compression, but the differences are imperceptible to the human eye, so the overall viewing experience is unaffected. Meanwhile, the size of the page's static image resources shrinks considerably, improving load speed.

PNG: PNG is an image format that supports transparency; in essence it is a collection of color-index data. It commonly comes in three variants: PNG-8, PNG-24, and PNG-32, i.e. 8-bit, 24-bit, and 32-bit. The PNG format carries a palette internally. Take PNG-8 as an example: it supports 256 colors plus transparency. Its palette holds 256 colors, so indexing the color of one pixel needs only 8 bits. That means a PNG-8 image can only use those 256 colors, so its palette is not rich, which is both a drawback and an advantage: its file size is also the smallest of the PNG variants.
A PNG-24 image can express 2^24 colors, i.e. one pixel's color needs 24 bits to index, so the data needed per color is 3 times that of PNG-8, and it does not support transparency. PNG-32 adds transparency on top of PNG-24. Which PNG to choose depends on the image's colors: if they are simple and limited, consider PNG-8; if they are rich, choose PNG-24 or PNG-32. The PNG variants differ only slightly; in actual development you need to weigh file size, format, image quality, and dimensions for the current project before deciding which format to use.

JPEG: high compression ratio; suitable for backgrounds, header/banner images, and other large-area imagery.
PNG: supports transparency and has good compatibility. Use it for backgrounds or pop-up layers that need transparency, or, when visual quality is the priority, to build the whole page with PNG images.
SVG: vector graphics. Its biggest advantage is that it scales up and down without losing sharpness; the files are small, and the markup can be embedded directly in the code. Of course, this only suits simple scenarios such as icons and buttons.

2. Image processing
CSS sprites: sprites are still a commonly used way to organize images. The idea is to merge many small images into one large image, then use background positioning to display the corresponding region, which reduces page requests and speeds up loading. The drawback is that many small images now depend on one large file: if that file fails to load, most of the page's imagery is missing. With today's networks, though, this is basically negligible; 4G or Wi-Fi is rarely that slow.
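As a minimal sketch of the positioning trick (the helper names and the one-row, fixed-width icon layout are assumptions, not any project's real code):

```javascript
// Sketch: show icon number `index` from a horizontal sprite strip by
// shifting the merged background image left (assumed fixed-width icons).
function spriteOffset(index, iconSize) {
  return `-${index * iconSize}px 0`;
}

function showIcon(el, index, iconSize = 32) {
  el.style.backgroundPosition = spriteOffset(index, iconSize);
}
```

The element's CSS would set the sprite as `background-image` with a fixed width/height; only `background-position` changes per icon.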
Image inlining: embedding images in the page in Base64 form is also a good way to reduce HTTP requests, but it is rarely done in practice, because images embedded in HTML are hard to maintain afterwards. In my experience, Base64 images are used when there is no better option.
For example, in real projects image resources are usually served from a different domain. When generating images with canvas, drawing a cross-origin image taints the canvas, so canvas.toDataURL(...) fails with a cross-domain error. When generating the image, inlining it as Base64 is the simplest, bluntest solution: embedding the image data directly in the HTML sidesteps the cross-domain problem.
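A rough sketch of such inlining, with Node's Buffer standing in for whatever build tool actually produces the data URI (the helper name is hypothetical):

```javascript
// Sketch: turn raw image bytes into a Base64 data URI that can be placed
// directly in an <img src="..."> or a CSS url(...), avoiding an HTTP request.
function toDataURI(bytes, mime = 'image/png') {
  const b64 = Buffer.from(bytes).toString('base64');
  return `data:${mime};base64,${b64}`;
}
```

Usage: `<img src="${toDataURI(pngBytes)}">` embeds the image inline; the trade-off is roughly 33% larger markup and no separate caching for the image.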
Compression: run images through batch compression tools.

HTML page loading and rendering

1. The process of web page rendering

[Figure: the browser rendering process]

When a web page loads, the first thing the browser gets is the HTML text, essentially one long string. The browser's parser performs lexical analysis on this string, generating a token (or an object) for each tag, then parses these tokens from top to bottom, building the corresponding DOM nodes step by step.
During lexical analysis, link and script tags are also parsed, and the corresponding web resources are requested. JavaScript is executed by the browser core's engine (V8 in Chrome), and CSS is handled much like HTML: it is parsed into the CSSOM. The parsed HTML (DOM), CSSOM, and scripts are then combined to produce the render tree, after which comes layout, and finally paint.

2. HTML loading characteristics
Sequential loading and concurrent loading:
Sequential loading refers to the lexical analysis above: the browser parses the HTML page from top to bottom, executing in order.
Concurrent loading means static resources under the same domain are requested at the same time. Of course, browsers also cap the number of concurrent requests; Chrome, for example, allows about 6 concurrent requests to the same domain. When we need to request a large number of images, we therefore use lazy loading or preloading.

[Figure: concurrent loading of resources in the network panel]

CSS blocking and JS blocking
CSS should be written in the head as much as possible, because CSS loading blocks page rendering, and here that is beneficial: it avoids the flash of unstyled content that appears when the page renders before the CSS has loaded. CSS loading also blocks the execution of JS, but it does not block the downloading of external JS files.
JS should be written at the bottom of the HTML as much as possible, because script loading blocks page rendering and scripts often depend on DOM nodes; so load the HTML and CSS first, then the JS. For JS that is not needed immediately (and does not affect the first screen), you can also load it asynchronously with defer or async. With defer, scripts download in parallel and execute in order after DOM parsing is complete. With async, execution order is not guaranteed: whichever script finishes downloading first executes first. With async you must therefore watch out for dependencies: if one script's logic depends on another's having run, they must still be arranged to execute in the right order.
Besides defer and async, there is also dynamic loading of JS. This is typically used with components: encapsulate a component, then use JS to dynamically load its JS and CSS.
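A common way to sketch such dynamic loading is a promise-based helper; the injectable `doc` parameter is only there so the sketch can run outside a browser, and is not part of any real API:

```javascript
// Sketch: dynamically load a script and resolve when it has executed.
function loadScript(src, doc = globalThis.document) {
  return new Promise((resolve, reject) => {
    const s = doc.createElement('script');
    s.src = src;
    s.async = true;                                  // don't block parsing
    s.onload = () => resolve(src);                   // fired after the script runs
    s.onerror = () => reject(new Error(`failed to load ${src}`));
    doc.head.appendChild(s);
  });
}
```

Usage (hypothetical names): `loadScript('/widget.js').then(initWidget)` loads a component's JS only when it is actually needed.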

Lazyload & Preload

Lazy loading is used when there are many images to load but the number actually needed depends on the user's actions. The goal is to reduce requests to the server and avoid wasting network traffic, while also improving the user experience. For example, e-commerce pages that display products load the data for the region the browser has scrolled to, instead of listing everything in one go. Pull-down-to-refresh and pull-up-to-load are likewise very common in H5 pages; due to the quirks of the iOS browser, some extra handling is required there.
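The core visibility check behind lazy loading can be sketched as a pure function (in modern browsers IntersectionObserver does this job; the function below is a simplified assumption, with a hypothetical preload margin that starts loading slightly before the element scrolls into view):

```javascript
// Sketch: should this element's image be loaded yet?
// elementTop: element's offset from document top; scrollTop: current scroll;
// viewportHeight: visible height; preloadMargin: load a bit ahead of time.
function shouldLoad(elementTop, scrollTop, viewportHeight, preloadMargin = 100) {
  return elementTop < scrollTop + viewportHeight + preloadMargin;
}
```

A scroll handler would run this over the not-yet-loaded images and swap a `data-src` attribute into `src` for those that pass.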

Preloading is used where user experience and smooth page interaction matter most: all the resources are loaded first, and only then is the page shown. The most common form is a loading progress bar: store all the static resource URLs in an array, load them in turn while computing the percentage complete, and move on to the next step once it reaches 100%.
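The progress-bar flow described above might be sketched like this, with `loadOne` as a stand-in for an Image- or fetch-based loader (an assumption, not the author's code):

```javascript
// Sketch: load every URL, count completions, and report percent complete.
function preload(urls, loadOne, onProgress) {
  let done = 0;
  return Promise.all(
    urls.map((url) =>
      loadOne(url).then(() => {
        done += 1;
        onProgress(Math.round((done / urls.length) * 100)); // drive the progress bar
      })
    )
  );
}
```

In a browser, `loadOne` could be `(url) => new Promise((res, rej) => { const img = new Image(); img.onload = res; img.onerror = rej; img.src = url; })`.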

Repaint and reflow

First, the concept of a frame: most device screens today refresh 60 times per second, so 1000 ms / 60 ≈ 16.7 ms per frame. Whatever rendering the browser does for a frame must finish within roughly 16.7 ms, otherwise frames are dropped, the page stutters, and the user experience suffers. Suppose the browser renders one step of an animation in exactly one frame: within that frame it first recalculates styles (CSS/DOM etc.), then reflows (layout) and updates the tree, then repaints (paint), and finally performs layer compositing (composite). As shown below:

[Figure: the per-frame rendering pipeline: style → layout → paint → composite]

1. Repaint and reflow: the most critical part of front-end rendering performance is reducing page repaints and reflows.
Reflow is triggered when the layout or geometric properties of the current page change.
Repaint happens when properties of render-tree nodes are updated without affecting the overall layout: only the background, color, and the like change.
2. Optimization: reduce repaints and reflows.
Avoid properties that trigger reflow. Layout-related properties such as top and height trigger reflow; for example, in a @keyframes animation, move elements with translateX instead of top. The following figures compare the two.

[Figure: rendering timeline when animating with top]

[Figure: rendering timeline when animating with translate]

Clearly there is one step fewer, layout: replacing the reflow-triggering top property with translate removes the layout step from the rendering pipeline, shortening rendering time and improving performance.
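In script-driven animation the difference amounts to which style property each frame touches; a minimal sketch (helper names are hypothetical):

```javascript
// Sketch: two ways to move an element horizontally.
function moveWithTop(el, x) {
  el.style.left = '';                        // (positioned element assumed)
  el.style.top = `${x}px`;                   // triggers reflow (layout) each frame
}

function moveWithTransform(el, x) {
  el.style.transform = `translateX(${x}px)`; // compositor only: no layout step
}
```

The same applies to `@keyframes`: animating `transform` keeps the work off the layout stage, while animating `top` forces layout every frame.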

Render frequently-changing blocks on an independent layer: extract the block that reflows and repaints frequently into its own layer, so the scope of the browser's reflow and repaint shrinks and CPU consumption drops. This works because the browser renders as follows:
the DOM is split into multiple layers;
each layer is rasterized, its nodes drawn into a bitmap;
each layer is uploaded to the GPU as a texture;
finally the layers are composited. As long as the layer we operate on repaints and reflows independently, the other layers are unaffected.
Given the rendering process above, let's talk about GPU acceleration. Creating a new compositing layer is, in effect, GPU acceleration. A new layer is created by:
1. 3D or perspective transforms;
2. a video element using accelerated video decoding;
3. a canvas element with a 3D (WebGL) context or an accelerated 2D context;
4. an element with a CSS animation on its own opacity, or a -webkit transform;
5. an element with accelerated CSS filters;
6. an element rendered on top of a sibling that is already a compositing layer (for example, element A sits above element B, and B is a compositing layer; A is then promoted to a compositing layer as well).
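A common, hedged sketch of deliberately promoting an element to its own compositing layer before animating it (will-change is the standard hint and translateZ(0) the older hack; whether either is needed depends on the project, and the helper name is hypothetical):

```javascript
// Sketch: hint the browser to give this element its own compositing layer,
// so animating it later avoids repainting the rest of the page.
function promoteToLayer(el) {
  el.style.willChange = 'transform';   // modern hint: "I will animate transform"
  el.style.transform = 'translateZ(0)'; // legacy 3D-transform hack, same effect
}
```

Remember the caution below: each promoted layer costs GPU memory, so apply this only to elements that genuinely animate often.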

Take point 2 (the video element) as an example: open a League of Legends live stream:

[Figure: the video element shown on its own compositing layer in DevTools]

We can see that the video here becomes its own layer, which is exactly this case.

Let me also highlight the last point, because in actual projects, especially on mobile, it often causes problems when building animation effects.

[Figure: compositing layers for overlapping elements A and B]

In the picture above, element B is on its own compositing layer, and the final screen image is composited on the GPU. But element A sits on top of element B, and we did not specify the stacking order of A and B, so the browser is forced to create a new compositing layer for A, turning both A and B into separate layers. Therefore, when using GPU acceleration to improve animation performance, it is best to give the animated element a higher z-index, deliberately controlling the ordering of compositing layers. This effectively reduces the unnecessary compositing layers Chrome creates and improves rendering performance.
A caution on creating layers: the browser not only has to upload each rendered layer image to the GPU but also store it there for reuse during the animation, so you cannot create layers at will; analyze the current project first. Creating a new layer has a price: every new rendering layer means extra memory allocation and more complex layer management, which is a real burden on mobile devices.

Browser storage

1. Storage media
Cookie: cookies are generally used to store account-verification information or relatively sensitive user data; for example, when a partner page of a mobile project needs the login state, you can use a transit page that stores the relevant data in cookies for easy access. In general, cookies are for client-server interaction and data storage. The mechanism works like this: the cookie is first generated on the server and delivered in the Set-Cookie header of the response; the browser writes the data locally, and every subsequent HTTP request under the same domain carries the cookie, so the server can authenticate the requesting user.

This is a very efficient interaction mechanism, but it also brings problems. Since the cookie is carried on every request, a large number of requests wastes traffic and slows loading. This can be mitigated with a CDN, separating the domains of the main site and the resource site. Of course, this matters mostly for high-traffic pages; if a page's PV is under 100,000, it can effectively be ignored on today's networks. Speaking of which, it reminds me of interviewing at some small companies: when I asked about their web performance optimization, the technical leads would basically say, "If traffic hasn't reached 100,000+, as long as the interface feels normal, that's convenient enough." Everyone laughed. Still, as developers we should approach things from a technical standpoint and do our best regardless of project size.

localStorage & sessionStorage: compared with cookies, these two H5 additions are purpose-built for storing data, with a capacity of up to about 5 MB. The key difference between them is that localStorage persists after the browser closes, while sessionStorage is cleared when the session ends. They suit temporary data such as form or shopping-cart contents.
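A small wrapper over Web Storage might look like this; the injectable backend is only there so the sketch also runs where localStorage is absent (e.g. plain Node), and the JSON serialization reflects the fact that Web Storage only holds strings:

```javascript
// Sketch: a typed get/set wrapper over localStorage or sessionStorage.
function createStore(backend = globalThis.localStorage) {
  return {
    set(key, value) {
      backend.setItem(key, JSON.stringify(value)); // Web Storage stores strings only
    },
    get(key) {
      const raw = backend.getItem(key);
      return raw === null ? undefined : JSON.parse(raw); // null means "absent"
    },
  };
}
```

In a page, `createStore()` wraps localStorage by default, and `createStore(sessionStorage)` gives the session-scoped variant.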
IndexedDB: this browser API is essentially a browser-side database, needed only when a large amount of structured data must be stored locally. It is currently rarely used, because the client seldom stores particularly large data: data is mostly handed to the backend, and what the front end stores is basically temporary and verification data. Another use of IndexedDB is building offline applications.
Service Worker: this helps when you need to fetch and run large, computation-heavy JS files. In cases like 3D rendering, the JS files are big, the computation is heavy, and page JS is single-threaded, which can cause jank: until one piece of JS finishes, the next must wait. A Service Worker runs independently of the current page, so different JS can be processed in the background while the main page listens and finally aggregates the results. The following is the Service Worker life cycle:

[Figure: Service Worker life cycle]

PWA: a progressive web app is a new application model that uses a series of new web capabilities together with UI design to achieve the best user experience; it is also the future direction of web apps. Bluntly, it tries to get as close as possible to the native-app experience, in three main directions: first, the app can be opened and used without a network; second, response speed is improved for the best experience; third, a clickable icon can be generated on the desktop, so tapping it opens the app like an ordinary native app, with full-screen display and push notifications.

Browser cache

A good caching strategy reduces HTTP request latency, avoids loading unnecessary data, and lowers network load, thereby improving page response time and giving users a better browsing experience. Caching, however, only speeds up the second and later visits; the first visit depends on the current network and device. The browser cache saves files on the client: in each session the browser checks whether the cached copy is still within its validity period. If it is, the browser does not request the file from the server again but uses the local copy directly; if it has expired, the browser issues a request to the server. This removes unnecessary requests and speeds up page response.

Web cache information is carried in HTTP headers; caching strategies are configured through certain header fields, which determine whether a resource must be requested from the server again. They can appear in the response headers or the request headers; the purpose is to let the client and the server know each other's cache state.

Cache-Control is the HTTP header that controls the caching strategy. Its directives include max-age, s-maxage, private, public, no-cache, and no-store; combined, these form a cache policy.
max-age: max-age is the maximum freshness lifetime. Within this window, counted from the time of the request, the browser does not issue a resource request to the server at all but takes the file straight from its cache. For example, open the official Honor of Kings website:

[Figure: the logo request, showing Cache-Control: max-age=86400 and "from memory cache"]

For this logo, the max-age in Cache-Control is 86400 seconds, i.e. 86400/3600 = 24 hours. Within one day, visiting this page will not trigger a request to the server for the logo, even if the logo on the server has changed. The "from memory cache" label in the picture shows it is fetched from memory.
s-maxage: s-maxage is similar to max-age in that no request is made to the server within the specified window, but with a difference: s-maxage applies to shared caches (explained later), for example a CDN. When both max-age and s-maxage appear in one Cache-Control header, s-maxage overrides both max-age and Expires.

private and public: private marks a private cache, accessible only to that user; public marks a shared cache accessible to multiple clients. If neither is specified, public is the default. Also note that s-maxage only takes effect together with public.
no-cache: the client must revalidate with the server on every use to check whether the cache has expired, instead of skipping the server for a period as max-age does. A common way to use no-cache is to combine it with max-age=0 and private:
Cache-Control: private, max-age=0, no-cache

no-store forbids caching entirely: every load requires a full resource request.
Expires: Expires sets an absolute cache expiry time. Like max-age, within the specified window the cache is used and no request is made to the server, but max-age has higher priority than Expires. It is typically used together with Last-Modified, because Expires is a strong cache: within its window no request reaches the server at all, regardless of whether the file changed server-side. One more point: Expires appeared relatively early, so it has an advantage in browser compatibility.
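The freshness decision these directives drive can be sketched as follows (a simplification: real caches also handle Expires, the Age header, and the shared-vs-private distinction, which this sketch ignores):

```javascript
// Sketch: parse a Cache-Control header into a directive map.
function parseCacheControl(header) {
  const out = {};
  for (const part of header.split(',')) {
    const [k, v] = part.trim().split('=');
    out[k.toLowerCase()] = v === undefined ? true : Number(v);
  }
  return out;
}

// Sketch: is a cached response still usable without contacting the server?
// ageSeconds = seconds since the response was received.
function isFresh(header, ageSeconds) {
  const cc = parseCacheControl(header);
  if (cc['no-store'] || cc['no-cache']) return false; // not cached / must revalidate
  const maxAge = cc['s-maxage'] !== undefined ? cc['s-maxage'] : cc['max-age'];
  return maxAge !== undefined && ageSeconds < maxAge;
}
```

Here the s-maxage preference mirrors the text's note that s-maxage overrides max-age, as seen from a shared cache's point of view.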

Last-Modified & If-Modified-Since: these refer to the file's last modification time and form a cache negotiation mechanism between the client and the server. In the figure below:

[Figure: Last-Modified in the response headers]

We can see a Last-Modified time in the response headers: the time the file was last modified on the server. The browser saves this time, and on the next request sends it back in the If-Modified-Since request header, telling the server "my copy was last updated at this time." If the file on the server has changed since then, the server reloads it and returns status code 200; if the resource on the server has not changed, the server returns 304 and the browser uses its cache directly.

ETag and If-None-Match: the server generates a hash from the file's content to identify the resource's state. On the next request to the server, the server checks whether the hash still matches to decide whether the file has changed. What problem does this solve? Using Last-Modified alone has these defects:
the file on the server was touched, but its content did not actually change;
the server cannot accurately obtain the resource's last modification time;
the resource was modified within one second, which Last-Modified cannot distinguish.

ETag is content-based: whatever the operation, if the content changes, the hash must change. ETag also has higher priority than Last-Modified. One more point: Last-Modified and ETag are only consulted when the browser revalidates; the cache must first be found expired (max-age) before these two come into play, and again ETag takes precedence over Last-Modified.
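The server side of this negotiation can be sketched as a pure function: ETag (If-None-Match) is consulted first, falling back to Last-Modified (If-Modified-Since). This is a simplified assumption with hypothetical names, not a complete HTTP implementation:

```javascript
// Sketch: decide 304 (client cache still valid) vs 200 (send fresh content).
// Header keys are assumed already lowercased, as Node's http module does.
function revalidate(resource, reqHeaders) {
  if (reqHeaders['if-none-match'] !== undefined) {
    // ETag wins over Last-Modified when both are present.
    return reqHeaders['if-none-match'] === resource.etag ? 304 : 200;
  }
  if (reqHeaders['if-modified-since'] !== undefined) {
    const lastModified = new Date(resource.lastModified);
    const since = new Date(reqHeaders['if-modified-since']);
    return lastModified <= since ? 304 : 200; // unchanged since client's copy
  }
  return 200; // no conditional headers: full response
}
```

A 304 response carries no body, which is exactly the bandwidth saving that negotiated caching buys.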

Cache strategy customization:
I divide caching strategies into two classes: one for static resources and the other for dynamic resources. New approaches may appear later, and I will write them up in due course. First, note whether the cache should be shared or private: partly to avoid being cached by proxies unintentionally, and partly as the good habit of explicit, standard configuration.

Static resources: static resources are files that will not be modified in place, such as CSS, JavaScript, text files, and images. For files like CSS and JavaScript we stamp a version into the file name at build time, so any change produces a new file name and the whole file is replaced. The caching strategy for static resources is therefore relatively simple: just adapt it slightly to the current project.

Dynamic resources: dynamic resources are things like price data for stocks or futures. Here the resources are shared, and the browser or proxy server should check for the latest version every time, so we can set:
Cache-Control: public, no-cache, no-store
If the data may be kept for a period, use max-age=... (in seconds), converted as needed. For example, a cache valid for one hour:
Cache-Control: public, max-age=3600
After that hour, if you need strict control over the cache, you can again use:
Cache-Control: public, max-age=3600, no-cache (or must-revalidate)
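The strategies above could be collected into one hypothetical helper; the exact values (such as the year-long max-age for versioned files) are common choices under these assumptions, not requirements:

```javascript
// Sketch: map a resource class to a Cache-Control value, mirroring the
// static/dynamic split described in the text.
function cachePolicy(kind) {
  switch (kind) {
    case 'static':    // versioned js/css/images: safe to cache for a long time
      return 'public, max-age=31536000';
    case 'dynamic':   // e.g. stock prices: revalidate on every use
      return 'public, no-cache';
    case 'sensitive': // per-user data: never store
      return 'private, no-store';
    default:
      return 'no-cache';
  }
}
```

A server would set this per route, e.g. `res.setHeader('Cache-Control', cachePolicy('static'))` for the build output directory.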

In the end it is all driven by requirements; the settings themselves are just a few directives.

Vary: Accept-Encoding: this is for resources served with gzip compression and cached by proxy servers. If a client does not support compression, it might otherwise receive data it cannot use, so the proxy server may hold two versions of the resource: one compressed and one uncompressed. The other reason is Internet Explorer: IE does not handle resources whose Vary header has any value other than Accept-Encoding and User-Agent.

Summary: The optimization plan of the page needs to be adjusted according to the needs of the current project to achieve the best actual experience.

Origin www.cnblogs.com/homehtml/p/12755937.html