Summary of web front-end performance optimization


Websites are generally divided into two parts: the front end and the back end. The back end implements the site's functionality, such as letting users register or comment on articles. And the front end? It is where that functionality is presented, and most of what shapes a user's experience of a site comes from the front-end pages.

        And what is a website for, if not to attract its target audience? The front end is what actually touches the user. So beyond back-end optimization, the front-end pages deserve serious performance work of their own; only then can we give users a better experience. It is a bit like the old quip about first impressions: appearance decides whether you want to learn someone's thoughts, and their thoughts decide whether you keep caring about the appearance. The same holds for websites: the front-end experience determines whether users want to try the site's features at all, and the features determine whether users ultimately accept or reject the front-end experience.

        Not only that: a well-optimized front end saves the company money, and the improved experience brings in more users. Having said all that, how should we actually optimize the performance of our front-end pages?

        Generally speaking, the web front end covers everything before the site's business logic: browser loading, the site's view layer, image services, CDN services, and so on. The main optimization approaches are browser access optimization, use of a reverse proxy, a CDN, and so on.

Browser access optimization

The browser request processing flow is as follows:


1. Reduce HTTP requests and set HTTP caching appropriately

        The HTTP protocol is a stateless application-layer protocol, which means each HTTP request must establish a communication link and perform its own data transfer, and on the server side each request needs an independent thread to handle it. These communication and service overheads are expensive, so reducing the number of HTTP requests can effectively improve access performance.

        The main means of reducing HTTP requests are merging CSS, merging JavaScript, and merging images. Combine the JavaScript and CSS needed for a page visit into a single file each, so the browser needs only one request per type. Images can be combined too, several images into one; if each image has a different hyperlink, CSS offsets can respond to mouse clicks and map each region to a different URL.
         Caching is powerful, and proper cache settings can greatly reduce HTTP requests. Suppose a site's home page issues 78 requests totaling more than 600 KB when the browser has no cache, but only 10 requests totaling about 20 KB on the second visit, once the browser has cached the resources. (Note that a direct F5 refresh behaves differently: the number of requests stays the same, but the server answers requests for cached resources with a 304 response, a header with no body, which still saves bandwidth.)

        What counts as a reasonable setting? The principle is simple: cache as much as possible, for as long as possible. For example, image resources that rarely change can be given a far-future expiration with the Expires header; resources that change infrequently but may change can use Last-Modified for request validation. Keep resources in the cache as long as possible. The detailed settings and rules of HTTP caching are beyond the scope of this article.
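As a rough sketch of the freshness rule those headers drive, the helper below checks whether a cached response is still within its max-age. The function name and parameters are illustrative, not a real browser API:

```javascript
// Sketch: how a cache decides whether a stored response is still fresh,
// following the max-age rule of HTTP caching. Illustrative names only.
function isFresh(responseTimeMs, maxAgeSeconds, nowMs) {
  const ageSeconds = (nowMs - responseTimeMs) / 1000;
  return ageSeconds < maxAgeSeconds;
}

// A response cached 10 minutes ago with max-age=3600 is still fresh:
isFresh(Date.now() - 10 * 60 * 1000, 3600, Date.now()); // true
```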

2. Use browser cache

        For a website, static resources such as CSS, JavaScript, logos, and icons are updated infrequently, yet almost every HTTP request needs them. Caching these files in the browser can improve performance dramatically. Browser caching is controlled through the Cache-Control and Expires attributes in the HTTP header, and the cache lifetime can be days or even months.

        Sometimes a change to a static resource must reach client browsers promptly. This can be achieved by changing the file name: instead of updating the contents of a JavaScript file in place, generate a new JS file and update the references to it in the HTML files.
        When updating static resources, a site that relies on browser caching should roll updates out incrementally. If ten icon files need updating, it is unwise to replace all ten at once; update one file at a time, with some interval in between. Otherwise a large number of caches in users' browsers expire simultaneously and all refresh at once, causing a sudden spike in server load and network congestion.

3. Enable compression

        Compressing files on the server and decompressing them in the browser effectively reduces the amount of data transferred. Where possible, also combine external scripts and styles, merging several into one. Text files can compress at ratios above 80%, so compressing HTML, CSS, and JavaScript with GZip achieves good results. Compression does, however, put some pressure on both the server and the browser, so weigh the trade-off when bandwidth is plentiful but server resources are scarce.

4. CSS Sprites

Merging images into CSS sprites is another effective way to reduce the number of requests.

5. Lazy Load Images

        This strategy does not actually reduce the total number of HTTP requests, but it does reduce them under certain conditions, or at least right after the page loads. For images, load only those on the first screen initially, and load the rest as the user scrolls down. If the user is only interested in the first screen's content, the remaining image requests are saved entirely.
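The decision behind lazy loading can be sketched as a pure function. All names here are illustrative; a real page would hook this check to scroll events, or more idiomatically use IntersectionObserver:

```javascript
// Sketch: load an image only once it is within (or near) the visible
// viewport. preloadMargin lets you start loading a bit before the image
// actually scrolls into view.
function shouldLoadImage(imageTop, scrollY, viewportHeight, preloadMargin) {
  return imageTop < scrollY + viewportHeight + preloadMargin;
}
```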

6. Put CSS at the top of the page and JavaScript at the bottom

       The browser renders the page only after it has downloaded all the CSS, so the best practice is to put CSS at the top of the page and let the browser download it as early as possible. If CSS is placed elsewhere, such as in the BODY, the browser may start rendering before the CSS has been downloaded and parsed, causing the page to jump from an unstyled state to a styled one and degrading the user experience. So put CSS in the HEAD.

        JavaScript is the opposite: the browser executes a script as soon as it loads it, which can block the rest of the page and slow its display, so scripts are best placed at the bottom of the page. A script that is needed while the page is being parsed, however, cannot be deferred to the bottom.

        Lazy Load JavaScript (load a script only when it is needed; such scripts generally do not carry the page's informational content). As JavaScript frameworks have grown popular, more and more sites use them, but a framework bundles many features that not every page needs. Downloading unnecessary scripts wastes resources: both bandwidth and execution time. Two approaches are common today: build a dedicated mini version of the framework for especially high-traffic pages, or lazy-load.

7. Asynchronous request callbacks (that is, separate out behavior and load the informational content gradually)

Some pages need to use a script tag to request data asynchronously, along these lines:

```javascript
/* Callback function */
function myCallback(info) {
    // do something here
}
// The HTML then includes a <script> tag that requests the data;
// the content returned by that script is a call to the callback:
//   myCallback('Hello world!');
```

Writing a <script> directly into the page this way also affects performance: it adds to the initial page load and delays the DOMContentLoaded and window.onload events. If timeliness permits, consider loading when the DOMContentLoaded event fires, or use setTimeout to control the load timing flexibly.

8. Reduce cookie transmission

        On the one hand, cookies are included in every request and response, and an oversized cookie seriously affects data transfer; consider carefully which data really needs to be written into cookies, and minimize the amount of data they carry. On the other hand, sending cookies with requests for static resources such as CSS and scripts is pointless. Consider serving static resources from a separate, cookie-free domain, so those requests carry no cookies and cookie traffic is reduced.

9. Javascript code optimization


(1). Reduce DOM access

a. HTML Collection (a live collection type returned by DOM queries)
  In scripts, document.images, document.forms, and getElementsByTagName() all return collections of type HTMLCollection. These are usually treated like arrays, since they have a length property and support index access. Their access performance, however, is much worse than an array's, because a collection is not a static result: it represents a query, and that query is re-executed every time the collection is accessed. "Accessing the collection" includes reading its length property and accessing its elements.
  Therefore, when you need to traverse an HTMLCollection, convert it to an array first to improve performance. Even without converting, access it as little as possible; for example, when looping, save the length and the members into local variables and use those instead.
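That caching pattern can be sketched as follows; a plain array of objects stands in for an HTMLCollection such as document.images so the example runs outside a browser, and the function name is invented for illustration:

```javascript
// Sketch: cache the collection's length (and each member) in local variables
// instead of re-reading them on every iteration of the loop.
function sumWidths(images) {
  let total = 0;
  // read length once, not on every pass
  for (let i = 0, len = images.length; i < len; i++) {
    total += images[i].width;
  }
  return total;
}
```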
b. Reflow & Repaint
  Beyond the point above, DOM operations must also account for the browser's reflow and repaint, because both are resource-intensive.

(2). Use with with caution

The code block with(obj){ p = 1 } actually modifies the execution environment inside the block, placing obj at the front of its scope chain. Accessing a non-local variable inside the with block therefore starts the search at obj and, if the name is not found there, continues up the scope chain, so using with effectively lengthens the scope chain. Every scope-chain lookup costs time, and an overly long chain degrades lookup performance.
  Therefore, unless you are certain that only properties of obj are accessed inside the with block, use with with caution. Instead, cache the properties you need in local variables.
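A minimal sketch of that replacement (the function and property names are invented for illustration):

```javascript
// Sketch: instead of with(rect){ w * h }, copy the needed properties into
// local variables, keeping the scope chain short and the lookups fast.
function areaWithoutWith(rect) {
  const w = rect.width;   // local copies replace the with block
  const h = rect.height;
  return w * h;
}
```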

(3). Avoid using eval and Function

Every time eval or the Function constructor is applied to source code held in a string, the script engine must convert that source into executable code. This is a very expensive operation, often more than a hundred times slower than a simple function call.
  eval is particularly inefficient. Since the content of the string passed to eval cannot be known in advance, eval interprets the code in its calling context, which means the compiler cannot optimize that context; the code can only be interpreted by the browser at run time. This has a large performance impact.
  The Function constructor is slightly better than eval, because code created with it does not affect the surrounding code, but it is still slow.
  Also, eval and Function hinder JavaScript compression tools from compressing effectively.
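Two common eval use cases and their cheaper replacements, as a sketch (the helper name is invented for illustration):

```javascript
// Parsing JSON: use JSON.parse instead of eval.
const data = JSON.parse('{"count": 42}');

// Dynamic property access: use bracket notation instead of eval("obj." + key).
function getProp(obj, key) {
  return obj[key];
}
```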

(4). Reduce scope chain lookup

As discussed above, scope-chain lookups have a cost, and this matters most inside loops. If a loop needs to access a variable from an outer scope, cache it in a local variable before the loop and write it back afterwards. This is especially important for global variables: they sit at the top of the scope chain, so accessing them requires the most lookups.
An inefficient version:

```javascript
// global variable
var globalVar = 1;
function myCallback(info) {
    for (var i = 100000; i--;) {
        // every access to globalVar walks to the top of the scope chain;
        // in this example it is accessed 100,000 times
        globalVar += i;
    }
}
```


A more efficient version:

```javascript
// global variable
var globalVar = 1;
function myCallback(info) {
    // cache the global variable in a local variable
    var localVar = globalVar;
    for (var i = 100000; i--;) {
        // accessing a local variable is fastest
        localVar += i;
    }
    // in this example the global is accessed only twice:
    // once to read it into localVar, once to write the result back
    globalVar = localVar;
}
```


Also, to reduce scope-chain lookups, you should limit the use of closures.

(5). Data access

  Data access in JavaScript involves literals (strings, regular expressions), variables, object properties, and arrays. Access to literals and local variables is fastest; access to object properties and array members costs more. It is recommended to move data into a local variable when:
  a. any object property is accessed more than once
  b. any array member is accessed more than once
  In addition, keep lookups into deeply nested objects and arrays as shallow as possible.
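Points a and b can be sketched like this (the function and field names are invented for illustration):

```javascript
// Sketch: each array member and each object property that is read more than
// once gets copied into a local variable before use.
function distanceSum(points) {
  let sum = 0;
  for (let i = 0, len = points.length; i < len; i++) {
    const p = points[i];       // cache the array member
    const x = p.x, y = p.y;    // cache the object properties
    sum += Math.sqrt(x * x + y * y);
  }
  return sum;
}
```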

(6). String concatenation

Concatenating strings with the "+" operator in JavaScript is relatively inefficient, because each operation allocates new memory, produces a new string, and assigns the concatenated result to a new variable. A more efficient approach is the array join method: put the pieces to be concatenated into an array and call join once at the end. Since using an array carries some overhead of its own, this method is worth considering only when many strings need to be concatenated.
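A sketch of the join approach (the function name and markup are invented for illustration; note that modern engines optimize "+" heavily, so measure before committing to either style):

```javascript
// Sketch: accumulate the pieces in an array, then call join once,
// rather than growing a string with repeated "+".
function buildList(items) {
  const parts = [];
  for (let i = 0; i < items.length; i++) {
    parts.push("<li>" + items[i] + "</li>");
  }
  return parts.join("");
}
```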

10. CSS selector optimization

Most people assume browsers parse CSS selectors from left to right. Take
#toc A { color: #444; }: if it were parsed from left to right it would be efficient, because the leading ID selector immediately narrows the search. In fact, browsers parse selectors from right to left, so for the selector above the browser must examine every A element and traverse its ancestors, which is far less efficient than expected. Given this behavior, there is plenty to pay attention to when writing selectors; interested readers can explore further.

CDN acceleration

The essence of a CDN (Content Distribution Network) is still caching: data is cached at the location closest to the user, so the user can fetch it at the fastest possible speed, within the so-called first hop of network access, as shown in the following figure.


Since CDN nodes are deployed in the network operators' facilities, and those operators are also the end users' network service providers, the first hop of a user's request reaches a CDN server. When the requested resource exists on the CDN, it is returned to the browser directly along the shortest path, which speeds up user access and reduces the load on the origin data center.
A CDN generally caches static resources, such as images, files, CSS, scripts, and static web pages. These files are accessed frequently, and caching them on the CDN greatly improves page load speed.

Reverse proxy

A traditional proxy server sits on the browser's side and forwards the browser's HTTP requests to the Internet; a reverse proxy server sits on the website's side and receives HTTP requests on behalf of the site's web servers. As shown below:


A forum site, for example, caches popular entries, posts, and blogs on the reverse proxy server to speed up user access. When this dynamic content changes, an internal notification mechanism invalidates the reverse proxy's cache, and the proxy reloads the latest content and caches it again.

In addition, a reverse proxy can also perform load balancing; an application cluster built behind a load balancer improves the system's overall processing capacity and thus the site's performance under high concurrency.
