Foreword
The previous article, "Front-End Performance Optimization: Broad in Scope, All the Prerequisite Knowledge You Need (Part 1)", introduced the background knowledge related to front-end performance optimization. This article summarizes the optimization plans themselves. The core directions are still the ones mentioned in the previous article:
- Ensure faster resource loading: the faster resources arrive, the sooner rendering can start and the view can be displayed
- Ensure faster view rendering/interaction: before the user can interact with the page, the page must first be rendered; after that, the page needs to respond to the user as quickly as possible. The goal is a good user experience
In other words, the two points above correspond to network-level optimization and browser-level optimization. The overall direction of front-end performance optimization is therefore quite clear, but each direction still involves many different, specific optimization techniques.
Let's first review the core process from entering a URL to the page finishing loading:

- Resolve DNS
- Establish the TCP connection
- The client sends the HTTP request
- The server responds with the HTTP resources
- The browser receives the response content, then parses and renders it
It is not hard to see that each of these steps takes a certain amount of time, so the optimizations below are organized around them.
Long text warning❗️❗️❗️
How to ensure faster loading of resources?
The following content focuses on optimizations in the DNS resolution, TCP connection, and HTTP request/response stages. The core of this group of optimizations is the network level.
Use dns-prefetch to reduce DNS query time

dns-prefetch resolves the domain names of cross-origin resources that may be used later in advance and caches the results in the system cache, which shortens DNS resolution time and improves website access speed.
Juejin's own pages, for example, use this technique.
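A minimal sketch of what such hints look like in the document head (the hostnames below are placeholders, not Juejin's actual domains):

```html
<!-- Resolve likely-needed third-party domains early -->
<link rel="dns-prefetch" href="//static.example-cdn.com" />
<link rel="dns-prefetch" href="//api.example.com" />
```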
[Extension] The core process of DNS resolution

When a browser accesses a domain name, it needs to perform a DNS resolution to obtain the IP address corresponding to that domain name:
- The browser checks its own cache for the IP address corresponding to the domain name; if it exists, it is returned, otherwise go to the next step
- The client checks the operating system's /etc/hosts file for the IP address corresponding to the domain name; if it exists, it is returned, otherwise go to the next step
- The client asks the local DNS server (LDNS) to resolve the name. After receiving the request, the LDNS first checks whether its own cache holds the IP address of the domain name; if so, it returns the result, otherwise go to the next step
- The LDNS asks a root name server to resolve the domain name, and the root name server tells the LDNS which first-level (top-level) domain name server to contact
- The LDNS asks the first-level domain name server, which tells it which second-level domain name server to contact
- The LDNS asks the second-level domain name server, which tells it which sub-domain name server to contact
- The LDNS asks the sub-domain name server, which returns the corresponding IP address
- The LDNS records the IP address in its cache and returns it to the client (which caches it as well), and the client visits the website using the received IP address
Use preconnect to establish connections in advance

The role of preconnect is to establish a connection to a third-party origin ahead of time. If it is set, the browser does the connection work early, but such a connection is usually only kept open for about 10s.

For example, before the current page can request a resource, DNS lookup, TCP handshake, TLS handshake, redirects, etc. may all be involved, and this process takes a certain amount of time.
Juejin's own pages, for example, use this technique.
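A minimal sketch (the origin is a placeholder):

```html
<!-- Do DNS lookup + TCP handshake + TLS handshake to this origin ahead of time -->
<link rel="preconnect" href="https://static.example-cdn.com" crossorigin />
```

The crossorigin attribute matters when the later fetch will be made in CORS mode (fonts, for example); the connection is only reused if its mode matches.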
Use preload / prefetch to preload resources

preload

The role of preload is to load the key resources of the current page in advance to speed up page rendering. The loading priority of a preload is related to its as attribute.
[Note] The as attribute must be set. Besides determining the priority mentioned above, it also affects how the browser identifies the request: if as is not set, the preload is treated as an XHR fetch, so the preloaded response may not be matched with the later real request. That means the preloading can fail, and a new request may be initiated when the resource is actually needed.
Juejin's own pages, for example, use this technique.
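A minimal sketch (file paths are illustrative):

```html
<!-- `as` is required so the browser assigns the right priority
     and can reuse the response for the real request -->
<link rel="preload" href="/fonts/main.woff2" as="font" type="font/woff2" crossorigin />
<link rel="preload" href="/js/app.js" as="script" />
```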
For example, vue-cli's default webpack configuration enables this behavior out of the box.
prefetch

preload is for resources the current page definitely needs: although they are loaded in advance, they are only executed when required. If you instead need to preload resources for the next page, you should use prefetch, which downloads resources while the browser is idle.
Again, vue-cli's default webpack configuration enables this behavior out of the box.
Compress resource size

Resources are transmitted over the network as HTTP data packets, so as long as the size of the transmitted packets can be reduced, resources will reach the client faster. That is the core purpose of compressing resources.
HTTP compression

A typical representative of HTTP compression is gzip, an excellent compression algorithm that can compress file resources in HTTP responses. Generally it is handled on the server side: by setting Content-Encoding: gzip in the response header, the server indicates which compression method was applied to the current resource (e.g. gzip, deflate, br), so the client can decompress it correctly.
[Note] gzip is not a panacea: it cannot guarantee that every file becomes smaller after compression. For more information, see the article "Content-Encoding: besides gzip, what else do you know?".
JD.com and Juejin, for example, both serve gzip-compressed resources.
webpack compression

Isn't HTTP compression enough? Why is webpack compression still needed?
First, be clear that compression itself takes time. If every resource is compressed by the server only when it is requested, the client still has to wait for the compression to complete, so there is still no guarantee that resources reach the client as quickly as possible.
The optimization is therefore to move the compression work into the packaging/build step. After all, the full series of packaging and optimization operations only needs to run when releasing to production, and compared with HTTP request/response speed, a slightly longer build time is not a big problem.
Some webpack plug-ins are listed below without going into their detailed usage: each is just a different means to the same end, and explaining every one in detail would take a dedicated article. You can look up the specific usage yourself.
Use CompressionPlugin to compress files

This plug-in is included in the plugin collection of the webpack documentation; its function is to "Prepare compressed versions of assets to serve them with Content-Encoding".
Use HtmlWebpackPlugin to compress HTML files

Usually we use the HtmlWebpackPlugin plug-in to generate the corresponding HTML file, or to automatically inject resources into an existing HTML template. In addition, its minify option can be configured to compress the template.
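A configuration sketch; note that in production mode html-webpack-plugin already enables minification by default, so the explicit options below are mainly illustrative:

```javascript
// webpack.config.js (fragment) -- requires `html-webpack-plugin`
const HtmlWebpackPlugin = require("html-webpack-plugin");

module.exports = {
  plugins: [
    new HtmlWebpackPlugin({
      template: "public/index.html",
      minify: {
        collapseWhitespace: true, // strip whitespace between tags
        removeComments: true,
        minifyCSS: true,          // minify inline <style>
        minifyJS: true            // minify inline <script>
      }
    })
  ]
};
```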
Minify webpack bundles

In a vue project you can run vue inspect --mode production > webpack.config.js to view the scaffold's default webpack configuration.
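In production mode the relevant part is JS minification via terser. A sketch of such a setup, not vue-cli's exact options (drop_console in particular is an opt-in choice, not a default):

```javascript
// webpack.config.js (fragment) -- requires `terser-webpack-plugin`;
// vue-cli wires something similar up by default in production mode
const TerserPlugin = require("terser-webpack-plugin");

module.exports = {
  optimization: {
    minimize: true,
    minimizer: [
      new TerserPlugin({
        parallel: true, // use multiple processes to speed up minification
        terserOptions: {
          compress: { drop_console: true } // optionally strip console.* calls
        }
      })
    ]
  }
};
```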
Use SplitChunksPlugin to customize the chunk-splitting strategy

By default, webpack packages as much module code together as possible. The advantages and disadvantages of this default rule are obvious:
- Advantage: it reduces the number of HTTP requests the final page makes
- Disadvantages:
  - The page's initial bundle is too large, which hurts first-screen rendering performance
  - The browser cache cannot be applied effectively
SplitChunksPlugin is the chunk-splitting scheme implemented since webpack 4. Compared with the CommonsChunkPlugin of webpack 3, it can compile Modules into different Chunks based on more flexible and reasonable heuristic rules, and finally build an application bundle with better performance that is friendlier to caching.
For example, vue-cli's default webpack configuration includes such a strategy.
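A sketch similar in spirit to vue-cli's default (cache-group names and values are approximate):

```javascript
// webpack.config.js (fragment)
module.exports = {
  optimization: {
    splitChunks: {
      chunks: "all",
      cacheGroups: {
        vendors: {
          name: "chunk-vendors",
          test: /[\\/]node_modules[\\/]/, // third-party code changes rarely -> caches well
          priority: -10,
          chunks: "initial"
        },
        common: {
          name: "chunk-common",
          minChunks: 2,                   // modules shared by at least 2 chunks
          priority: -20,
          reuseExistingChunk: true
        }
      }
    }
  }
};
```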
Use MiniCssExtractPlugin to extract and compress CSS

MiniCssExtractPlugin extracts CSS into separate files, creating one CSS file for each JS file that contains CSS, and it supports on-demand loading of CSS as well as CSS source maps.
For example, vue-cli's default webpack configuration uses it in production builds.
Use ImageMinimizerWebpackPlugin to compress image resources

Images are still an indispensable resource in a web application, and image size is one of the bottlenecks of first-screen loading, so compressing images is also something performance optimization needs to consider.
ImageMinimizerWebpackPlugin can be used to optimize/compress all images, and it supports both lossless (no loss of quality) and lossy (reduced quality) compression modes.
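A lossless-mode sketch; the exact option shape depends on the plugin version, so treat this as an approximation:

```javascript
// webpack.config.js (fragment) -- requires `image-minimizer-webpack-plugin`
// plus the imagemin plugins referenced below
const ImageMinimizerPlugin = require("image-minimizer-webpack-plugin");

module.exports = {
  optimization: {
    minimizer: [
      "...", // keep the default JS/CSS minimizers
      new ImageMinimizerPlugin({
        minimizer: {
          implementation: ImageMinimizerPlugin.imageminMinify,
          options: {
            plugins: [
              ["gifsicle", { interlaced: true }],
              ["jpegtran", { progressive: true }],  // lossless JPEG
              ["optipng", { optimizationLevel: 5 }] // lossless PNG
            ]
          }
        }
      })
    ]
  }
};
```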
Remove dead code with Tree Shaking

Tree Shaking depends on the static structure of ES6 module syntax (i.e. import and export). When webpack's mode is "production", more optimizations are enabled, including minification and Tree Shaking.
But at the same time we must ensure that:
- Try to use ES6 module syntax, i.e. import and export
- Make sure no compiler (e.g. babel) converts the ES6 module syntax to CommonJS syntax (e.g. the default behavior of @babel/preset-env)
- A "sideEffects" property can be added to the project's package.json file to indicate whether the code has side effects
- Function calls can be marked as side-effect-free via the /*#__PURE__*/ annotation
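The last two points in fragment form (the module and file names below are made up for illustration):

```javascript
// package.json (fragment): declare the package side-effect-free so
// unused exports can be dropped safely:
//
//   { "sideEffects": false }
//
// or whitelist the files that DO have side effects:
//
//   { "sideEffects": ["./src/polyfills.js", "*.css"] }

// math.js -- keep ES6 `export` so the structure stays statically analyzable
export function square(x) { return x * x; }
export function cube(x) { return x * x * x; } // never imported -> shaken out

// elsewhere, a call marked as pure may be dropped by the minifier
// if its result is unused:
export const helper = /*#__PURE__*/ square(3);
```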
Reduce the number of HTTP requests

Under different protocol versions, the number of requests can still be a cause of slow requests/responses:
- Merge shared resources, e.g. sprite images
- Inline module resources, e.g. generating base64 images, referencing svg via symbol, etc.
- Merge code blocks, e.g. the build tool's chunk-splitting strategy, shared component encapsulation, extracting reusable component logic, etc.
- Load resources on demand, e.g. route lazy loading, image lazy loading, pull-up loading, paginated loading, etc.
Reduce unnecessary cookies

Unnecessary cookies transferred back and forth waste bandwidth:
- Reduce what is stored in cookies
- Host static resources on a CDN (i.e. a non-same-origin domain); requests to different domains do not carry cookies by default
CDN hosting of static resources + HTTP caching

The essence of CDN acceleration is cache acceleration: static resources stored on the server are cached on CDN nodes. When this static content is accessed later, there is no need to visit the origin server; the nearest CDN node can serve it, which both speeds up access and reduces traffic pressure on the origin.
Juejin, for example, serves its static resources from a CDN.
Upgrade the protocol to HTTP/2

Problems with HTTP/1.x: HTTP's underlying protocol is TCP, and TCP is connection-oriented, requiring a three-way handshake to establish a connection, where:
- HTTP/1.0 uses short-lived connections: the connection is closed after each request/response, and re-establishing it is time-consuming
- HTTP/1.1 uses persistent connections, enabled via the Connection: keep-alive request/response header. The advantage is that a persistent connection lets multiple requests share one TCP connection; the disadvantage is head-of-line blocking:
  - Multiple requests on one TCP connection must queue up; only when the request at the head of the queue gets its response can the next request proceed
  - One mitigation is to move some requests onto other TCP connections when head-of-line blocking occurs on the current one
  - Browsers generally limit a single domain to 6-8 TCP connections, which is one reason applications split sub-domains and host static resources on a CDN
- In HTTP/1.x the header part can be very large, and each request may carry a large amount of repeated header text, which is also one of the reasons for slow requests/responses
HTTP/2 can solve the problems above:
- To address the limited number of TCP connections, HTTP/2 uses multiplexing: each domain needs only one TCP connection
- To address head-of-line blocking, HTTP/2 adds a binary framing layer: each request/response carries a stream id that pairs requests with responses one-to-one, so there is no need to wait for earlier requests to finish, and each request can also be given a priority
- To address large header data, the header frames transmitted in HTTP/2 are represented in a binary format instead of the original text format, and are compressed with the HPACK algorithm
  - Both the sending and receiving ends maintain an index table in which header fields are identified by an index, so subsequent repeated header information can be replaced by the corresponding index
- To improve on the traditional request -> response model, HTTP/2 provides server push, so the server can proactively push key resources to the client to speed up resource loading
Juejin, for example, already serves over HTTP/2.
How to ensure faster rendering and interaction of views?
After ensuring that resources reach the client quickly, the next step is to optimize the browser's parsing and rendering, and of course the subsequent page interaction. This is optimization at the browser level.
The core process by which the browser renders an HTML file:

- HTML parsing: the HTML document is lexically analyzed and output as the DOM Tree
- CSS parsing: the CSS is parsed into style rules, generating the CSSOM
- Style calculation: the DOM Tree and CSSOM are combined to generate the Render Tree
- Layout calculation: the coordinates of the Render Tree nodes on the page are calculated, creating the Layout Tree
- Layer division: pages contain many complex effects, such as 3D transforms, page scrolling, or z-axis ordering via z-index. To implement these more conveniently, the rendering engine generates dedicated layers for specific nodes and produces the corresponding Layer Tree
- Layer painting:
  - When the rendering engine paints a layer, it splits the painting into many small draw instructions, which are arranged in order into a paint list
  - When a layer's paint list is ready, the main thread commits the list to the compositor thread
- Rasterization:
  - Since the viewport is limited, the user only sees a small part of the page at a time, and it is unnecessary to paint all layer content, so the compositor thread divides each layer into tiles
  - The rendering process sends tile-generation instructions to the GPU, which generates the tiles' bitmaps
- Compositing and display:
  - Once all tiles are rasterized, the compositor thread generates a DrawQuad command to draw the tiles and submits it to the browser process
  - The browser process contains a viz component, which draws the page content into memory according to the DrawQuad commands and finally displays that memory on the screen
Rendering level

Reduce factors that block rendering
Before the view can actually be rendered, the DOM Tree and CSSOM must be generated, so both the HTML parser and the CSS parser should be able to proceed as soon as possible. Meanwhile, JavaScript loading and execution may block this process:
- Keep the number of HTML nodes rendered on first paint as small as possible; avoid deeply nested structures and avoid heavy use of slow tags (e.g. iframe)
- Place CSS resources in the document head, and reduce CSS complexity, e.g. by using CSS selectors sensibly
- Place JavaScript resources at the bottom of the document, and make sensible use of the defer / async loading attributes
Lazy loading

Lazy loading mainly targets situations with many resources that load slowly, such as image resources or large list data:
- Image resources: prioritize loading images inside the viewport; images outside the viewport are loaded later, or loaded when they move into the viewport
- List data: lists usually contain large amounts of data that cannot all be rendered at once; generally it is rendered in batches via pagination, pull-up loading, and similar means
White screen optimization

The white screen occurs because an SPA must wait for JavaScript to finish loading and executing before generating the actual page structure; that is, the initial HTML template contains no meaningful structure to render:
- Add a white-screen loading state: a default loading effect can be added to the template and replaced once the real page content has rendered
- Add a skeleton screen: same idea as above; before the real page content is displayed, a default placeholder view is shown first to avoid a blank screen
Server-side rendering

Modern frameworks are client-side application frameworks by default: component code runs in the browser and then outputs DOM elements to the page. This is called client-side rendering (CSR):
- Advantages
  - Better user experience: front-end routing does not actually perform page jumps, i.e. the page is not refreshed and reloaded, which brings higher fluency
  - Lower server resource usage: CSR hands rendering to the client, so the server does not need to care about the rendering computation, which reduces server pressure
- Disadvantages
  - Longer "white screen" time: CSR cannot render until the *.js it depends on is received and parsed, so it depends strongly on the current network environment; in a poor network environment, especially a mobile one, the white-screen time becomes too long
  - Unfriendly to SEO: because of the long white-screen time, for a while there is no meaningful content for search engines to analyze, classify, and tag, and search engines will not wait for the page to finish rendering, so CSR is not friendly to SEO
Server-side rendering (SSR) renders the same components into the corresponding HTML string on the server and sends it to the browser for display. The client does not need to wait for all the JavaScript to be downloaded and executed before showing content, so users see fully rendered content sooner.
Pre-rendering (prerender)

Although server-side rendering (SSR) solves some client-side problems, it also brings problems of its own:
- Development consistency must be ensured: for example, the component lifecycle hooks that can run on the server differ from those on the client, and some external libraries may need special handling to work in a server-rendered application
- More build setup and deployment requirements: a fully static SPA can be deployed on any static file server, but a server-rendered application requires an environment that can run a Node.js server
- More server-side load: rendering a complete application in Node.js is more CPU-intensive than just serving static files, and you need to plan for heavy traffic, etc.
Therefore, not every application is suited to server-side rendering. If you only want to improve the SEO of a few promotional pages (e.g. /, /about, /contact) through SSR, you should prefer pre-rendering instead:
- Pre-rendering generates the page content for the corresponding routes in advance, during the packaging/build process (in an off-screen state)
- Pre-rendering needs to cooperate with the build tool (webpack, rollup, etc.); with webpack, for example, it can be supported by prerender-spa-plugin
Interaction level

Reduce reflow/repaint
Repaint: when a style change on an element does not affect its position in the document flow (e.g. color, background-color, visibility), the browser applies the new style to the element and repaints it
Reflow: when the size, structure, or certain properties of some or all elements in the Render Tree change, the browser re-renders part or all of the document
- Reduce frequent DOM operations
- Take frequently changing elements out of the document flow, e.g. elements with continuous animation effects, which would otherwise keep triggering reflow and repaint
- Avoid accessing, or reduce access to, properties that force the browser to flush the render queue, e.g. offsetTop, offsetLeft, offsetWidth, etc.
  - [Extension] The browser's render-queue mechanism stores operations that would trigger reflow or repaint in a queue, and only performs them after a certain time or a certain number of operations is reached
- Avoid making CSS modifications one at a time: when modifying multiple styles from JavaScript, try to use CSS selectors (e.g. toggling a class) to change the styles in one batch
- Use will-change to enable GPU acceleration: declaring a property via will-change lets the browser make the corresponding optimizations before the element's property actually changes
- Set image dimensions in advance to avoid a reflow after the image resource finishes loading
Debounce/throttle

Debouncing: when an operation is triggered frequently, only the last trigger counts, and the intermediate ones are ignored

Throttling: within a specified time interval, the corresponding operation is allowed to execute only once

Use debounce/throttle sensibly to optimize operations in the application. For example, throttling can be used to optimize scroll events and fuzzy search, while debouncing can be used to optimize some button click operations, etc.
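Minimal sketches of both; the injectable now parameter exists only to make the throttle easy to test, and is not part of the usual signature:

```javascript
// Throttle: run at most once per `wait` ms; intermediate calls are dropped.
function throttle(fn, wait, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    const t = now();
    if (t - last >= wait) {
      last = t;
      fn(...args);
    }
  };
}

// Debounce: only the last call in a burst runs, `wait` ms after the burst ends.
function debounce(fn, wait) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Throttle demo with a fake clock
let fakeTime = 0;
let calls = 0;
const onScroll = throttle(() => calls++, 100, () => fakeTime);
onScroll();                 // fires (calls = 1)
fakeTime = 50; onScroll();  // dropped, only 50ms elapsed
fakeTime = 120; onScroll(); // fires (calls = 2)
console.log(calls);         // 2
```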
Web Worker

JavaScript is single-threaded. In scenarios that require heavy computation (e.g. video decoding), the UI thread will be blocked, and the browser may even freeze outright.

A Web Worker runs scripts in a new thread, independent of the main thread, so heavy computation can be performed without affecting the main thread's UI rendering. That said, Web Workers should not be abused.
Virtual list

The most common approach to large lists is paginated loading:
- Table-based rendering with pagination only ever renders a fixed number of DOM nodes
- List-based pull-up loading keeps adding DOM nodes as more data is loaded, and past a certain point the page is bound to freeze
The core of a virtual list is to keep the number of rendered DOM nodes fixed and update the view by swapping the data contents dynamically, ensuring that the actual number of DOM nodes in the document does not grow with the data volume (it is in fact very similar to table pagination, except that it supports scrolling).
Uploading large files in chunks

A file-upload feature is indispensable in most projects, but uploading large files still needs optimization. The so-called resumable upload and instant upload are both built on the core capability of chunked upload.
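The chunking step itself is just a matter of computing byte ranges; in the browser each range then maps to file.slice(start, end), which is uploaded as a separate request (the helper name is mine):

```javascript
// Split a file of `fileSize` bytes into chunk descriptors; each descriptor
// maps to `file.slice(start, end)` and is uploaded separately, which is what
// makes retrying (resumable upload) a single failed piece possible.
function sliceChunks(fileSize, chunkSize) {
  const chunks = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    chunks.push({
      index: chunks.length,
      start,
      end: Math.min(start + chunkSize, fileSize) // last chunk may be smaller
    });
  }
  return chunks;
}

const MB = 1024 * 1024;
const chunks = sliceChunks(10 * MB + 1, 5 * MB);
console.log(chunks.length);                   // 3
console.log(chunks[2].end - chunks[2].start); // 1 (the trailing byte)
```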
Excel import/export

For Excel import/export, many people's first thought is that it is back-end work. In most cases, however, the back-end interface's processing speed is affected by various factors and may be unsatisfactory, so the front end sometimes needs to optimize: for example, on import the front end can parse the file and send only the parsed JSON data instead of the file itself, and on export it can skip a separate API call and generate the export directly from the currently displayed data.
Optimization of Vue projects

I believe everyone is familiar with this part, so here is just a brief list (including but not limited to):
- Reduce the creation of reactive data: purely presentational data used in templates can be frozen with Object.freeze() to avoid being converted into unnecessary reactive data
- Vue component initialization has a performance cost: use functional components to skip the component-initialization process; they are suitable for simple components with no business logic that only display content
- Use v-show and v-if appropriately, set a unique key (not the index) for v-for items, and do not use v-for together with v-if, etc.
- Use KeepAlive to reuse components and avoid the performance cost of repeatedly creating and destroying them
- Use () => import(xxx) to implement route lazy loading
- Use ESM to encapsulate custom utility libraries, etc.
- Import third-party libraries on demand
- Use closures sensibly and avoid memory leaks
- Clean up side effects in components promptly, e.g. setTimeout, setInterval, addEventListener
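The first point in sketch form. As I understand it, Vue 2 skips observing objects it cannot extend, which is why freezing opts data out of reactivity; verify against your Vue version:

```javascript
// Purely presentational data: freeze it so Vue 2 skips converting every
// property into reactive getters/setters.
const bigStaticList = Object.freeze(
  Array.from({ length: 3 }, (_, i) => ({ id: i, label: `row-${i}` }))
);

console.log(Object.isFrozen(bigStaticList)); // true

// Note: Object.freeze is shallow -- the row objects themselves stay mutable,
// but Vue 2 skips observing a non-extensible array entirely.
```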
Summary

The optimization schemes above correspond to the performance metrics mentioned in the previous article, as follows:
- Time to First Byte (TTFB)
- First Paint (FP)
- First Contentful Paint (FCP)
- First screen time / Largest Contentful Paint (LCP)
- Cumulative Layout Shift (CLS)
- First Input Delay (FID)
The sooner key resources reach the client, the shorter the TTFB, which also indirectly reduces FP and FCP; compressing resources improves LCP as much as possible; reducing page reflow/repaint lowers the CLS value, meaning the view is more stable; and FID tracks the delay before the browser can respond to user input such as clicks and taps, so ensuring fast resource loading and early page rendering keeps its value low and the view responsive.
Finally

The scope of front-end performance optimization is very large. The optimizations listed above focus mainly on resource loading and page rendering/interaction, and there are many more concrete optimization schemes (including but not limited to the above), each with a different focus.