Talking about front-end performance optimization

    At my first company, when I had just started doing front-end work, my lead told me, a complete newbie, that the most important things in front-end development were page performance optimization and table controls. For the table control we started with Bootstrap's dataTable, and I ended up modifying its source code day and night to fit the business requirements.

    But what I want to talk about today is not Bootstrap's dataTable; it is front-end performance optimization. After a few years on the job I have gained at least a superficial understanding of the topic, so let's get into today's theme: talking about front-end performance optimization.

1. Reduce HTTP requests to the server

    A regular HTTP request is a short connection in the form "request - response - disconnect". For every independent resource in the project, the browser sends a GET request to the server and then waits for the server to return that resource, and each of these "request - response - disconnect" cycles costs a certain amount of time. If we can reduce the number of HTTP requests, we lighten the load on the server, and for the user we shorten the page load time and improve the experience.

How do we reduce HTTP requests? There are several approaches:

    1. Use CSS sprites to merge multiple small images into one image file, and use background-position to show just the part of the image that is needed.

This method is generally used for the small images scattered around a page. With base64-encoded images and icon fonts now in widespread use, many colleagues and articles take the view that sprites are outdated; I won't pass judgment on that here. The best technique is the one that suits your project. A minimal sketch of the technique follows.
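
As a rough illustration, here is what the sprite technique looks like; the file name icons-sprite.png, the icon sizes, and the offsets are all invented for the example:

    <style>
      /* all the small icons live in one combined image (hypothetical file) */
      .icon {
        display: inline-block;
        width: 16px;
        height: 16px;
        background-image: url("icons-sprite.png");
        background-repeat: no-repeat;
      }
      /* shift the background so only the wanted 16x16 region shows through */
      .icon-home   { background-position: 0 0; }
      .icon-search { background-position: -16px 0; }
      .icon-user   { background-position: -32px 0; }
    </style>

    <!-- one downloaded image serves every icon on the page -->
    <span class="icon icon-search"></span>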

    2. Merge multiple CSS style files into a single style file, merge multiple scripts into a single script, and then reference the merged style/script file in the page.

The idea here is the same as with CSS sprites: turn what used to be several HTTP requests into a single one. In practice this is done with popular build tools such as webpack or gulp, and we can even merge CSS into the JS file through the build step. Note that if absolutely everything is merged, common styles/scripts shared by several pages, such as the header or footer, can no longer be cached on the client as separate files, so you have to weigh the pros and cons and adjust accordingly.
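
In terms of what the page references, the change amounts to this (the file names and the dist/ output folder are made up; the build tool decides the real ones):

    <!-- before: one request per file -->
    <link rel="stylesheet" href="css/reset.css">
    <link rel="stylesheet" href="css/layout.css">
    <link rel="stylesheet" href="css/theme.css">
    <script src="js/util.js"></script>
    <script src="js/page.js"></script>

    <!-- after: a single merged style file and a single merged script -->
    <link rel="stylesheet" href="dist/bundle.css">
    <script src="dist/bundle.js"></script>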

    3. Use base64 encoding to display pictures

You can find plenty of tools online that convert an image into a base64-encoded string, and build tools can do the conversion for you automatically. The drawback is that although such images no longer need to be fetched from the server, they also cannot be cached on the client, so they are downloaded again as part of the page on every visit. A lengthy base64 string also takes up a lot of space in your code and makes it harder to maintain. I only recommend this approach on pages that users rarely revisit.
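
For reference, a data URI image looks roughly like this; the encoded string is truncated here, since even a small icon produces hundreds of characters:

    <!-- inline image: no extra HTTP request, but not cached as a separate resource -->
    <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..." alt="logo">

    <style>
      /* the same idea works for CSS background images */
      .logo { background-image: url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..."); }
    </style>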

    4. Write small, low-reuse styles/scripts directly in the page.

Not every style/script has to go into an external file. Small snippets with a low reuse rate can be written directly in the HTML page, saving the HTTP requests for the external files (a small sketch of this appears after the list below). I found this suggestion in a blog post I read a while ago: the author argued that writing this type of style/script in the HTML and then compressing the page with gzip is a good choice. That gave me some doubts, because compressing CSS and JS is familiar, but compressing HTML is something we rarely see. So, cautiously, I searched around and did find some good points, which I'd like to share here.

Compressing CSS and JavaScript is well established and used by the major websites. Compressing HTML (specifically, removing whitespace characters and comments), apart from search pages such as Google's, is basically nowhere to be seen.

The reasons for not compressing HTML are simple:
In an HTML document, multiple consecutive whitespace characters are equivalent to a single whitespace character, not to nothing. That is to say, deleting whitespace such as line breaks outright is not safe and may change how some elements are rendered.
HTML has the pre element, which holds preformatted text; none of the whitespace inside it may be removed.
The HTML may contain IE conditional comments. These are part of the document's logic and cannot be stripped out.
For dynamic pages, compressing the HTML on every response also adds CPU load on the server, which may cost more than it saves.
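
Back to point 4 itself: by writing a snippet "directly on the page" I mean nothing more than the following, for a one-off style or script that no other page shares (the class name and behaviour are invented for the example):

    <head>
      <style>
        /* one-off styles used only by this page */
        .promo-banner { background: #f60; color: #fff; padding: 8px; }
      </style>
    </head>
    <body>
      <div class="promo-banner">Limited-time offer</div>

      <script>
        // a tiny page-specific script, not worth an extra HTTP request
        document.querySelector('.promo-banner').addEventListener('click', function () {
          this.style.display = 'none';
        });
      </script>
    </body>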

    5. Add an identifying suffix to resource files. Through the build tool we can append a hash suffix to each resource file name. As long as these files have not been updated, the client reads them straight from its cache; only when the hash suffix of a file on the server differs from the suffix cached on the client does the client send a request to download the file again.
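
In the page this comes out looking roughly like the following; the hash values are made up, and in practice a build tool generates them (for example webpack's [contenthash] placeholder or the gulp-rev plugin):

    <!-- the hash changes only when the file content changes -->
    <link rel="stylesheet" href="dist/app.8f3c2d1a.css">
    <script src="dist/app.5b9e07c4.js"></script>
    <!-- files whose content is unchanged keep their old hash, so the browser keeps serving them from cache -->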

2. Reduce the file size

    Besides the number of HTTP requests, page performance is also affected by the size of the requested resource files. The most intuitive example: downloading a 10 MB image can hardly be the same as downloading a 10 KB one. The comparison is a bit exaggerated, but reducing the size of requested files as much as possible really matters. Things we can do:

1. Compress (minify) style/script files. Again we can use build tools such as webpack or gulp for this; they can reduce the size of CSS/JS files considerably;

2. Choose image formats deliberately. Where no transparent background is required, use GIF only for images with flat colors and no gradients; for JPEG images, tune the "quality" setting appropriately, or run the images through one of the many online compression tools to shave some size off.

3. Use icon fonts to replace small images. Common front-end frameworks such as Bootstrap, Amaze UI and MUI all provide icon font libraries, and if the icons a framework provides can't meet the project's needs, head over to Alibaba's iconfont vector icon library.
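
Assuming Bootstrap 3 is already loaded on the page, using one of its Glyphicons is just a class name and costs no image request at all:

    <!-- a font icon instead of a small picture -->
    <button type="button" class="btn btn-default">
      <span class="glyphicon glyphicon-search" aria-hidden="true"></span> Search
    </button>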

3. Use CDN

Using a CDN has several benefits:

1. If the user has already downloaded the same CDN resource on another site, then on our site it is served straight from the client cache. It also takes those file requests off our own server (when an external CDN is used), reducing the server's load;

2. Extra domains raise the number of resources the browser will download in parallel. For example, if a site requests resources from only one domain, an older Firefox allows at most two files from that domain to download at the same time; if an external CDN is also used, Firefox allows two parallel downloads from our own domain plus another two from the CDN's domain.

However, CDNs come with one significant cost: extra DNS lookups. If a page pulls in resources from several different CDN domains at once, it may spend so much time waiting on DNS resolution that the losses outweigh the gains.

DNS resolution: people are used to remembering domain names, but machines recognise each other by IP address. Domain names and IP addresses have a many-to-one relationship: an IP address does not necessarily correspond to just one domain name, while a domain name resolves to a single IP address. Converting between the two is called domain name resolution; it is handled by dedicated DNS servers, and the whole process is automatic.

Because of this, the usual advice is to pull all of a site's external resources from one reliable, fast CDN. In other words, having a page request resources from only two domains, the site's own domain plus one CDN domain, is generally the best choice.
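
A sketch of that recommendation, with cdn.example.com standing in for whatever CDN you actually use (the library paths are equally made up); the dns-prefetch/preconnect hints ask the browser to start resolving that one extra domain as early as possible:

    <head>
      <!-- resolve the CDN's domain name early -->
      <link rel="dns-prefetch" href="//cdn.example.com">
      <link rel="preconnect" href="https://cdn.example.com" crossorigin>

      <!-- every third-party resource comes from this single CDN domain -->
      <link rel="stylesheet" href="https://cdn.example.com/bootstrap/3.3.7/css/bootstrap.min.css">
      <script src="https://cdn.example.com/jquery/1.12.4/jquery.min.js"></script>
    </head>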

4. Defer requests and load scripts asynchronously

Normally our script files are downloaded alongside other resource files, but there is a catch: in Firefox, for example, there is a short period of "execution blocking" once a script has been downloaded. That is, while the browser downloads a script and executes it, the browser's other work is blocked, so the other resources on the page cannot be requested or downloaded.

If the JavaScript on your page takes a long time to execute, the user will clearly feel the page lag. A simple way around this is to place the script tags just before the closing </body> tag; the scripts then become the last resources requested and naturally no longer block the requests for everything else.
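
In other words, the skeleton of the page becomes something like this (file names invented for the example):

    <body>
      <!-- page content first, so it can be requested and rendered without waiting on scripts -->
      <div id="app">...</div>

      <!-- scripts last, just before the closing body tag -->
      <script src="js/vendor.js"></script>
      <script src="js/main.js"></script>
    </body>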

In addition, although I said above that "our script files are downloaded alongside other resource files", downloading in parallel does not mean executing in parallel: the browser executes scripts in the order in which they were requested. And that brings a problem: if the scripts on the page barely depend on one another, or not at all, this rule only adds to the time the page spends blocked.

The way out is to let scripts execute asynchronously without blocking, for example by adding the defer or async attribute to the script tag, or by injecting scripts dynamically. On their own, though, none of these is a great solution: either there are compatibility problems, or handling the dependencies between scripts by hand becomes far too cumbersome.
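
For reference, the three approaches just mentioned look like this (file names made up); which one is appropriate depends on the browsers you must support and on whether the scripts depend on each other:

    <!-- defer: download in parallel, execute in document order after parsing finishes -->
    <script src="js/analytics.js" defer></script>

    <!-- async: download in parallel, execute as soon as it arrives (order not guaranteed) -->
    <script src="js/ads.js" async></script>

    <script>
      // dynamic injection: a script element created from code behaves like an async script
      var s = document.createElement('script');
      s.src = 'js/optional-widget.js';
      document.body.appendChild(s);
    </script>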

I recommend using RequireJS (the AMD specification) or Sea.js (the CMD specification) to load scripts asynchronously and manage module dependencies. The former does "dependency preloading" (all dependent script modules are loaded up front, so execution is fastest), the latter "dependency on demand" (dependent script modules are loaded lazily, so the requests are made more sensibly); choose whichever best fits the project's needs.
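
A minimal RequireJS (AMD) sketch, with invented module names, just to show the shape of it; Sea.js code looks similar except that dependencies are required lazily inside the factory function:

    <!-- data-main tells RequireJS which module to start from -->
    <script data-main="js/main" src="js/require.js"></script>

    // js/modules/cart.js -- an AMD module that declares its dependency up front
    define(['modules/store'], function (store) {
      return {
        add: function (item) { store.save(item); }
      };
    });

    // js/main.js -- entry point: the dependency is loaded asynchronously, then the callback runs
    require(['modules/cart'], function (cart) {
      cart.add({ id: 1 });
    });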

5. Defer requests for files below the fold

The "first screen", or area above the fold, is what the user sees when the page first loads. Sites like JD and Taobao apply this kind of lazy loading to the images you only see after scrolling, and it gives the user a useful illusion: the page feels like it loads faster because the above-the-fold content appears quickly, even though the files further down the page have not actually been loaded yet.
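
One way to implement this kind of lazy loading, sketched here with IntersectionObserver (older pages did the same job with scroll-event handlers); the image path and class name are made up:

    <!-- below-the-fold images carry the real URL in data-src rather than src -->
    <img class="lazy" data-src="images/product-1.jpg" alt="product">

    <script>
      // copy data-src into src only when the image approaches the viewport
      var observer = new IntersectionObserver(function (entries) {
        entries.forEach(function (entry) {
          if (entry.isIntersecting) {
            entry.target.src = entry.target.getAttribute('data-src');
            observer.unobserve(entry.target);
          }
        });
      });
      document.querySelectorAll('img.lazy').forEach(function (img) {
        observer.observe(img);
      });
    </script>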

6. Optimize the order of page modules

For example, take a page like this: the left side is a sidebar holding the user's avatar, profile information and the site's ads, and the right side is the article content area. Our markup is then very likely to look something like this (a rough sketch with invented class names):
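
    <div class="container">
      <!-- the sidebar comes first in the source: avatar, user info, site ads -->
      <div class="sidebar">
        <img class="avatar" src="images/avatar.jpg" alt="avatar">
        <div class="user-info">...</div>
        <div class="ads">...</div>
      </div>
      <!-- the article content only comes after it -->
      <div class="article">...</div>
    </div>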

As a result the browser loads the sidebar first and only then our article, from top to bottom, in keeping with its single-threaded processing of the document...

Obviously that is not a user-friendly loading order. We have to work out which module on the page matters most to the user and should therefore be displayed first.

In the example above, the article content is what the user should see first and what the browser should request and display first, so we change the markup to something like:
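
    <div class="container">
      <!-- the article comes first in the source, so it is requested and rendered first -->
      <div class="article">...</div>
      <!-- the sidebar moves after it; CSS (floats, flexbox order, etc.) keeps the visual layout the same -->
      <div class="sidebar">
        <img class="avatar" src="images/avatar.jpg" alt="avatar">
        <div class="user-info">...</div>
        <div class="ads">...</div>
      </div>
    </div>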

7. Other suggestions

1. Don't use @import in CSS; it makes one style file wait on another style file's request and adds to the page's waiting time (see the sketch after this list);

2. Avoid redirects for pages and page resources;

3. Reduce invalid requests, such as CSS/JS referencing a resource that doesn't exist, which can mean a long wait and blocking until the error response comes back;

4. Whether or not you decide to put your scripts at the bottom of the page, make sure the scripts are placed after the style files;

5. For files smaller than 50 KB, reading the data directly as a stream is faster than reading it from the file system, and the opposite holds above 50 KB. For example, if an image is under 50 KB we can store its binary data in the database and have the page read it from there; if it is larger than 50 KB, it is better stored as a file in an accessible folder;

6. When the browser requests a static resource, it sends along all the cookies under the same domain, and for static content the server does nothing with them, so they just waste bandwidth. Make sure requests for static content are cookie-free: serve images and other static resources from a separate domain to cut down cookie transmission and improve page performance (see the sketch below).
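
Two of the points above in concrete form; static.example.com is a made-up example of a cookie-free host for static assets:

    /* avoid: inside css/main.css -- theme.css cannot even be requested
       until main.css itself has been downloaded and parsed */
    @import url("theme.css");

    <!-- prefer: both stylesheets are listed in the HTML and download in parallel -->
    <link rel="stylesheet" href="css/main.css">
    <link rel="stylesheet" href="css/theme.css">

    <!-- static resources served from a separate, cookie-free domain -->
    <img src="https://static.example.com/images/banner.jpg" alt="banner">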

That's all for performance optimization this time. This article borrows from several other bloggers' posts and also contains some of my own thoughts. Front-end performance optimization surely goes far beyond what's here; if you have other ideas, please let me know. Thank you very much.

 
