Practicing ten common front-end performance optimization techniques on a real work project

1. Record the initial state before optimization

1. First-screen load time on the initial render

Open the DevTools Network panel, check "Disable cache", and refresh the page to capture the resource loading of the very first render of the first screen.
Mark 1:
DOMContentLoaded: the DOM content finishes loading at 11.64 seconds, which corresponds to the blue vertical line in the waterfall.
Mark 2:
Load: all of the page's resources (images, audio, video, etc.) finish loading at 11.66 seconds, which corresponds to the red vertical line in the waterfall.
The metric we care about most is the first-screen load time. 11.66 seconds is far too long, so we focus on the resources that finish loading last, i.e. the files at marks 3 and 4.
Looking at the waterfall at mark 4, the file that takes the longest to load is vendor.js, the bundled third-party library file of our project. If we split the third-party packages out of it and load them only when they are needed, or while the page is idle after the first screen has loaded, the first-screen load time will shrink. Similarly, index.js, which also takes a long time to load, is our bundled business code; if we use route-level lazy loading to split out the other routes, the first screen will render sooner.
Mark 3 is the loading of the antd and icons CDN resources. Since these resources cannot be changed, they set the upper limit of our first-screen optimization: the best possible outcome is that the first screen finishes loading as soon as antd.js does. Judging from the second screenshot, that is roughly the queueing time of antd.js plus its download time, so seven to eight seconds is our optimization ceiling.
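These two timestamps can also be read programmatically with the Navigation Timing API. The snippet below is not from the original post, just a convenient way to cross-check what the Network panel reports:

// Run in the console: values are milliseconds since navigation start
const [nav] = performance.getEntriesByType('navigation')
console.log('DOMContentLoaded:', Math.round(nav.domContentLoadedEventEnd), 'ms')
console.log('Load:', Math.round(nav.loadEventEnd), 'ms')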

(screenshot: first-screen network waterfall with marks 1–4)

2. Second-load time (with cache)

Now uncheck "Disable cache" and refresh the page again to look at the second render. Because the second render takes advantage of the cache, the load time is 856 ms, which is already quite good.
(screenshot: network panel of the cached reload)

3. Lighthouse score

(screenshot: initial Lighthouse report, score 75)

2. Performance optimization (Part 1): loading performance

(1) Remove console.log

1. Configuration method

// webpack.prod.js

const TerserJSPlugin = require('terser-webpack-plugin')

module.exports = {
  // ...
  optimization: {
    minimizer: [
      new TerserJSPlugin({
        // ...
        terserOptions: {
          compress: {
            drop_console: true, // strip all console.* calls from the production bundle
          },
        },
      }),
    ],
  },
  // ...
}

2. Optimization results

We use the webpack-bundle-analyzer plugin to compare the size of the business-code bundle index.js before and after the removal, and it does indeed get smaller.
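The post does not show how the analyzer is wired up; a minimal sketch, with assumed options, looks like this:

// webpack.prod.js (sketch)
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer')

module.exports = {
  // ...
  plugins: [
    // generates a treemap report of every bundle and the modules inside it
    new BundleAnalyzerPlugin({ analyzerMode: 'static', openAnalyzer: false }),
  ],
}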
Before removal:
(screenshot: bundle analysis before removing console.log)

After removal:
(screenshot: bundle analysis after removing console.log)

(2) Split the vendor library files that are not needed for the first screen

Analysis with the webpack-bundle-analyzer plugin shows that the three largest packages inside vendor.js are monaco-editor, antd, and the Tuya in-house component library tuya-fe.
Finding antd in vendor was unexpected. After confirming that the antd version on the CDN and the one in vendor are identical, it was clear that antd was being bundled twice, so the author excluded antd from the bundle through webpack's externals configuration.
(screenshot: bundle analysis of vendor.js)
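The externals change itself is not shown in the post; a minimal sketch of excluding antd in favor of the CDN build might look like this (the global name antd matches antd's UMD bundle; everything else is assumed):

// webpack.prod.js (sketch)
module.exports = {
  // ...
  externals: {
    // `import ... from 'antd'` now resolves to window.antd, provided by the antd CDN <script>
    antd: 'antd',
  },
}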

Because monaco-editor and tuya-fe are not needed on the first screen and are both very large, we split them out of vendor.js below.

1. Configuration method

// webpack.prod.js

optimization: {
  // ...
  splitChunks: {
    chunks: 'all',
    // cache groups
    cacheGroups: {
      // split out monaco-editor
      monacoEditor: {
        chunks: 'async',
        name: 'chunk-monaco-editor',
        priority: 22,
        test: /[\\/]node_modules[\\/]monaco-editor[\\/]/,
        enforce: true,
        reuseExistingChunk: true,
      },
      // split out the tuya-fe component library
      tuyaComponent: {
        chunks: 'async',
        name: 'chunk-tuya-component',
        priority: 12,
        test: /[\\/]node_modules[\\/]@tuya-fe[\\/]galaxy-public-components[\\/]/,
        enforce: true,
        reuseExistingChunk: true,
      },
      vendor: {
        name: 'vendor', // chunk name
        priority: 1,
        test: /[\\/]node_modules[\\/]/,
        minSize: 0, // size threshold
        chunks: 'all',
        minChunks: 1, // minimum number of times a module must be reused
      },
      // shared modules
      common: {
        name: 'common', // chunk name
        priority: 0, // priority
        minSize: 0, // size threshold for common modules
        minChunks: 2, // a module must be shared by at least 2 chunks
      },
    },
  },
  // ...
},

2. Optimization results

Before vendor.js is split:
(screenshot: bundle analysis before the split)
After vendor.js is split:
(screenshot: bundle analysis after the split)

(3) Route lazy loading, async route merging & async route prefetch

Async route prefetch:
With webpack's magic comments we want the async route files to be downloaded during the browser's idle time after the first screen has loaded, so that visiting other routes becomes faster (the route file has already been requested during the first screen, so when the route is actually visited, the resource is served straight from the prefetch cache).

Async route merging:
Every route file other than the first screen is tiny, so splitting each one out separately would create too many HTTP requests. We therefore merge several routes into one chunk by giving their dynamic imports the same webpackChunkName.

1. Configuration method

// router-view.tsx

import { Suspense } from 'react'
// Router, Switch and routeRender come from the project's react-router setup

<Router>
  <Suspense fallback={<div>loading...</div>}>
    <Switch>{routeRender(routes)}</Switch>
  </Suspense>
</Router>

// router.ts

import { lazy } from 'react'

const routes: RouteItem[] = [
  {
    key: 'addVirtualDevice',
    title: '创建设备', // "Create device"
    path: '/virtual/device/create',
    component: lazy(() =>
      import(
        /* webpackChunkName: "chunk-other-pages", webpackPrefetch: true */ '@pages/virtual/create-device'
      ),
    ),
  },
  {
    title: '版本信息列表', // "Version info list"
    path: '/virtual/firmware/version/management',
    component: lazy(() =>
      import(
        /* webpackChunkName: "chunk-other-pages", webpackPrefetch: true */ '@pages/virtual/firmware-list/firmware-version-management'
      ),
    ),
    hide: true,
  },
  // ...
]

2. Optimization results

Before the route split:
(screenshot: network panel before the route split)

After the route split:

(screenshot: network panel after the route split)

3. Performance optimization (Part 1): summary of results

1. First-screen load time on the initial render

The load time has been reduced from 11.66 seconds to 6.32 seconds.
At mark 2 in the figure below, the async routes that we split out of the index.js business bundle are loaded while the browser is idle after the first screen has finished, so the index.js file containing the first-screen code is smaller and the first-screen content can load sooner.
At mark 1 in the figure below, we can also see that after applying all of the optimizations above, the last resource to finish loading is the antd CDN external resource, so the first-screen load time has essentially reached its optimization ceiling.
(screenshot: network waterfall after the Part 1 optimizations)

2. The effect of async route prefetch

As shown below, the benefit of prefetch is that when a non-first-screen route is visited, its route file has already been prefetched during the first screen, so it is served straight from the prefetch cache and takes only 2 milliseconds.
(screenshot: route chunk served from the prefetch cache)

3. Second-load time (with cache)

The cached reload time dropped from 856 ms to 818 ms.
(screenshot: network panel of the cached reload)

4. Lighthouse score

The score has increased from 75 to 91.
(screenshot: Lighthouse report after the Part 1 optimizations)

4. Performance optimization (Part 2): build speed and development efficiency

(1) Parallel compression of JS

TerserWebpackPlugin supports multi-process code compression, which speeds up the project build.
The plugin enables parallel compression by default, so the default configuration (parallel: true) is usually kept to get the best performance.

1. Configuration method

const TerserJSPlugin = require('terser-webpack-plugin')

module.exports = {
  // ...
  optimization: {
    // parallel JS compression (parallel: true is the plugin's default)
    minimizer: [
      new TerserJSPlugin(),
    ],
  },
  // ...
}

2. Optimization results

As shown below, with parallel compression the production build time drops from 23.15 s to 12.67 s.
Result of yarn run build without parallel compression:
(screenshot: build output without parallel compression)
Result of yarn run build with parallel compression:
(screenshot: build output with parallel compression)

(2) noParse

By default, webpack analyses the dependencies of every imported module (library), whether or not it actually depends on other modules. For libraries that are completely standalone and have no dependencies at all, this analysis is wasted work and can slow the build down considerably.
So for such standalone libraries we can tell webpack in advance not to parse their dependencies, which speeds up bundling.

Our project uses lodash, and since this library does not pull in any other package, we can tell webpack not to analyse lodash's dependencies (typical standalone libraries are jquery and lodash).
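For context on the results below, the difference between the on-demand import the project originally used and the full import the author later switched to looks roughly like this (illustrative only):

// on-demand import: only the debounce module ends up in the bundle
import debounce from 'lodash/debounce'

// full import: the entire lodash build is pulled in, giving noParse something to skip
import _ from 'lodash'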

1. Configuration method

module: {
  noParse: /lodash/, // a RegExp, not a string: skip dependency parsing for lodash
}

2. Optimization results

The author's expectation was overturned in practice: the build speed in both the development and the production environment showed no obvious improvement, and was sometimes even slower.
Checking the project afterwards, it turned out that only lodash's debounce function was imported on demand; the amount of lodash code involved was probably too small for noParse to make a visible difference.
After switching the on-demand import to a full import, the speed-up became more noticeable.
With lodash fully imported, enabling noParse reduced the development build time from 4917 ms to 4772 ms.
(screenshot: dev build time without noParse)

(screenshot: dev build time with noParse)

With lodash imported the same way, enabling noParse reduced the production build time from 16.94 s to 16.43 s.
(screenshot: production build time without noParse)

(screenshot: production build time with noParse)
Weighing the actual effect against the project's situation, the gain is marginal, so the author chose not to use noParse in this project.

(3) Hot update

Suppose you have interacted with the page (typed text into an input box, selected something in a dropdown, and so on) and then change a file: devServer rebuilds, the page refreshes, and everything you entered is gone. If you want the changed code to be applied to the page without a refresh, so that your in-page state survives, you need hot module replacement.
The hot update plugin is webpack's built-in HotModuleReplacementPlugin.
In practice, however, the author found that CSS modules hot-update even without any hot update configuration. It turns out the CSS loader chain already implements this for us, so as soon as CSS code is modified it is applied immediately.
Hot updating JS modules is harder. The author first configured the project following the hot update guide on the webpack website and it did not work; it turned out that the configuration for plain JS and for react + ts is different, and the official guide covers plain JS.

1. Configuration method (JS module hot update for react + ts)

Reference articles:
https://www.codeleading.com/article/16042772093/
https://github.com/gaearon/react-hot-loader
The shadow device project is configured as follows:
// app.tsx

import { hot } from 'react-hot-loader/root'

export default hot(App)

// src/index.tsx

if (ENV === 'local') {
  const realModule = module as any
  if (realModule.hot) {
    realModule.hot.accept(() => {
      ReactDOM.render(
        <>
          <WrapperApp onGlobalStateChange={null} />
        </>,
        document.querySelector('#root'),
      )
    })
  }
}

// .babelrc

{
  "plugins": [
    "react-hot-loader/babel"
  ]
}

// webpack.dev.js

module.exports = {
  devServer: {
    hot: true,
  },
  plugins: [
    new webpack.HotModuleReplacementPlugin(),
  ],
}

2. Optimization results

(1) Before configuration
Without the configuration, in the development environment we type the word "copy" into the input box:
(screenshot: input box containing "copy")
Then we change the page title, adding the words "hot update". After saving, the page refreshes and the content of the input box is cleared.
(screenshot: page refreshed, input box emptied)
(2) After configuration
In the development environment, when we modify the page title again, add the words "hot update" and save, the content of the input box is still there: the JS module hot update configuration works.
(screenshot: title updated while the input box keeps its content)

5. Performance optimization (Part 3): not applicable to the Libra release platform

The author did not merge the following four optimizations into the project. For gzip, HTTP/2 and the caching strategy, the Libra release platform already implements them for us automatically. Pre-rendering does not fit either: on the Libra platform the project's global variables are injected after the build, and pre-rendering requires the CDN resources referenced by the project to exist before the build, whereas Libra uploads them to the CDN after the build. So these methods do not apply to our Tuya Libra release platform.

Preparation before optimizing (deploying the shadow device project with nginx):

Deploying a front-end project on an nginx server

(1) Gzip

Enabling gzip compression reduces the size of the files transferred over the network and noticeably speeds up the loading of page content.

1. Configuration method

Add the following configuration to the nginx.conf file:

http {
  gzip on;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
}

2. Optimization results

Before enabling gzip compression:
In the Network panel, the Size column shows the transfer size of each resource; hovering over it shows the size of the resource itself.
We can tell whether a response is gzip-compressed from the Content-Encoding response header.
(screenshot: network panel before gzip)

After enabling gzip compression:
Comparing the vendor.js file, after enabling gzip its transfer size drops from 957 kB to 308 kB, less than a third of the original, and its load time drops from 10.08 seconds to 3.63 seconds.
The overall first-screen load time also drops from 13.28 seconds to 6.76 seconds.
(screenshot: network panel after gzip)

(2) HTTP2

What advantages does HTTP/2 have over HTTP/1.1?
Advantages of HTTP/2:
1. Binary framing
2. Request and response multiplexing
HTTPS is a prerequisite for HTTP/2: we can only enable HTTP/2 on top of HTTPS.

1. Configuring the HTTPS server

Front-end project deploys nginx server
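The full HTTPS setup is covered in the linked article above. As a rough sketch, enabling HTTP/2 in nginx only needs the http2 flag on an SSL listener (the domain and certificate paths below are placeholders):

server {
  listen 443 ssl http2;                              # browsers only speak HTTP/2 over HTTPS
  server_name example.com;                           # placeholder domain
  ssl_certificate     /etc/nginx/certs/example.pem;  # placeholder paths
  ssl_certificate_key /etc/nginx/certs/example.key;
}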

2. Optimization results

The measured result is that the first-screen load time under HTTP/1.1 and HTTP/2 is about the same.
In Lighthouse, however, the HTTP/2 version scores higher.
(screenshot: Lighthouse score over HTTP/1.1)

(screenshot: Lighthouse score over HTTP/2)

(3) Cache optimization

(1) Strong cache (Cache-Control, disk cache)

If the resource has not expired, there is no need to talk to the server at all; the locally cached copy is used directly.
(screenshot: strong cache flow)

(screenshot: strong cache flow)

A strong cache is controlled by setting Cache-Control in the response header, for example Cache-Control: max-age=31536000 (the unit is seconds). The expiry time of the resource is the Date response header plus max-age; when the resource is requested again, this is compared with the current time, and as long as the resource has not expired, the strong cache is hit.
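The check the browser performs boils down to one comparison. This tiny illustration is not from the post, just the arithmetic spelled out with example values:

// strong-cache freshness check, with illustrative values
const dateHeader = new Date('Wed, 21 Sep 2022 08:00:00 GMT').getTime() // Date response header
const maxAgeMs = 604800 * 1000                                         // Cache-Control: max-age=604800 (7 days)
const isFresh = Date.now() < dateHeader + maxAgeMs                     // true => cache hit, no request is sent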

Common Cache-Control values:
max-age: sets the cache expiry time.
no-cache: no local strong cache; send a normal request to the server and let it decide how to respond (strong cache disabled, negotiation cache still usable).
no-store: more thorough; no local cache and no server-side caching measures, the server must return the resource again every time (both strong cache and negotiation cache disabled).
private: only the end user (the browser) may cache the response.
public: intermediate routers and proxies may cache the response as well.
nginx configuration that enables the strong cache and disables the negotiation cache:
(screenshot: nginx strong-cache configuration)
Initial request:
Date is the time of the initial request.
(screenshot: response headers of the initial request)
Second request (resource not yet expired):
The author refreshed the page at 8:06 and requested the vendor.js resource again. Computed from Date plus Cache-Control: max-age, the resource had not yet expired, so this request hit the strong cache (the status column shows it came from the memory cache) and no request reached the server at all (the request headers only show "Provisional headers are shown").
(screenshot: vendor.js served from the cache)

Second request (resource expired):
If the resource has expired, the behaviour is the same as the initial request: a new request is sent to the server, and the Date in the response header is updated to the time of this new request. From then on, the expiry time is again the Date of the last server response plus Cache-Control: max-age, compared with the time of the next access; before it expires the strong cache is hit, and once it expires a new request goes to the server, Date is refreshed, and the cycle repeats.

(2) Negotiation cache (Last-Modified and Etag, 304 status code)

The client asks the server whether the resource has changed. The server checks (compares the client's copy with its own); if the resource has not changed, it tells the client "use your local cache instead of fetching it from me again" and returns status code 304. If the server finds the resource has changed, it returns 200 together with the new resource.

(screenshot: negotiation cache flow)

(screenshot: negotiation cache flow)
Using the ETag resource identifier for the negotiation cache:

(screenshot: ETag / If-None-Match request and response headers)

Using Last-Modified for the negotiation cache:
(screenshot: Last-Modified / If-Modified-Since request and response headers)

(screenshot)
The example requests show that the negotiation cache reduces the amount of data transferred over the network.

(screenshot)

(screenshot)

nginx configuration that disables the strong cache and enables the negotiation cache:
(screenshot: nginx negotiation-cache configuration)
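The configuration itself only appears as a screenshot; a minimal sketch of what it might contain (assumed, not the author's exact config):

location / {
    add_header Cache-Control "no-cache";   # disable the strong cache: always revalidate with the server
    etag on;                               # ETag (on by default in nginx) drives the negotiation cache / 304 responses
}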
The first request (captured in two screenshots):
(screenshot)
(screenshot)
The second request:

(screenshot: 304 Not Modified response)

(3) The combined flow of negotiation cache and strong cache (priority: strong cache > ETag > Last-Modified; all three can be set at the same time)

(screenshot: combined caching decision flow)

(4) Project cache optimization strategy

1. For the html file (disable the strong cache):
A single-page application has only one html file as its sole entry, and all other resources are loaded through it. When resources are updated we want the cached html to expire immediately so that users never get the old file; if the html were strongly cached, it would keep loading the old js, css and other resources.
For the html file we can use the negotiation cache, or simply not cache it at all with no-store.
nginx configuration (index.html is not cached at all):
(screenshot: nginx configuration for index.html)
Effect of the configuration:
Because caching is completely disabled, every request goes back to the server for the resource and returns status code 200.
(screenshot: index.html always returning 200)

2. For static resource files such as js, css and images (contenthash + strong cache)
(1) webpack configuration
Configure the output js, css and other static files to be named with a hash of their content. Once the content changes, the contenthash changes and so does the file name, and the browser automatically loads the newly built file. This avoids the situation where, after a release, the file name stays the same, the cached copy is used, and users keep seeing the old page content.
If the content does not change, the hash and the file name stay the same, so the cache can be used and resources load faster. A rough sketch of such a naming configuration follows.
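The naming described above might be configured roughly like this (a sketch; the exact paths and plugin options are assumptions, not taken from the project):

// webpack.prod.js (sketch)
const MiniCssExtractPlugin = require('mini-css-extract-plugin')

module.exports = {
  output: {
    filename: 'js/[name].[contenthash:8].js',       // entry and vendor chunks
    chunkFilename: 'js/[name].[contenthash:8].js',  // lazily loaded route chunks
  },
  plugins: [
    new MiniCssExtractPlugin({
      filename: 'css/[name].[contenthash:8].css',
    }),
  ],
}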
(2) nginx cache configuration
Because the file name changes whenever the content changes and stays the same when it does not, we can safely apply a strong cache to these files.
nginx configuration:
(screenshot: nginx strong-cache configuration for static assets)
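As a hedged sketch of the strategy the screenshot describes (no caching for index.html, a seven-day strong cache for hashed static assets; the location patterns are assumptions):

location = /index.html {
    add_header Cache-Control "no-store";      # never cache the SPA entry html
}

location ~* \.(js|css|png|jpg|gif|svg)$ {
    expires 7d;                               # emits Cache-Control: max-age=604800
}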
Effect of the configuration:
The seven days are converted into 604800 seconds, and Cache-Control: max-age=604800 is added to the response automatically.

(screenshot: static assets served with Cache-Control: max-age=604800)

(5) Precautions

In practice the author found that the strong cache on the lazily loaded route files does not take effect in Chrome's normal mode, only in incognito mode or in Firefox.

(4) Pre-rendering

Render the pages of our single-page application ahead of time, during the build. This speeds up the first-screen load.

1. Configuration method

(1) Install the react-snap plugin
(2) package.json file

 "scripts": {
    
    
    "postbuild":"react-snap",
  },
  "reactSnap": {
    
    
    "source": "dist",
    "minifyHtml": {
    
    
      "collapseWhitespace": false,
      "removeComments": false
    }
  },

(3) Modify the project content
1. Insert the global variables the project needs into the index.html file; for the shadow device project these are ENV and REGION.
2. Configure webpack's publicPath as '/' instead of the CDN prefix, because during pre-rendering the referenced js would otherwise not exist yet and an error is thrown. Alternatively, do a regular release first so that the js and css static resources are uploaded to the CDN, and then run the build with pre-rendering.
3. Modify the index.js file:

import ReactDOM, { hydrate } from 'react-dom'

function render(props) {
  const { container, onGlobalStateChange } = props
  // previous implementation:
  // ReactDOM.render(
  //   <>
  //     <WrapperApp onGlobalStateChange={onGlobalStateChange} />
  //   </>,
  //   container
  //     ? container.querySelector('#root')
  //     : document.querySelector('#root'),
  // )
  const rootElement = document.querySelector('#root')
  if (rootElement.hasChildNodes()) {
    // #root already contains the pre-rendered markup, so hydrate it
    hydrate(
      <WrapperApp onGlobalStateChange={onGlobalStateChange} />,
      container ? container.querySelector('#root') : rootElement,
    )
  } else {
    ReactDOM.render(
      <WrapperApp onGlobalStateChange={onGlobalStateChange} />,
      container ? container.querySelector('#root') : rootElement,
    )
  }
}

if (!window.__POWERED_BY_QIANKUN__) {
  render({})
}

4. Wrap the home page's API requests in a condition so they are not sent during pre-rendering. The home page code runs in full during pre-rendering, including its API requests, and a request that cannot complete will throw an error; a sketch of such a guard follows the screenshot.
(screenshot: conditional API request on the home page)
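A sketch of that guard, based on react-snap's documented behaviour of crawling with the user agent "ReactSnap" (the component and the fetchHomePageData call are placeholders, not the project's real code):

import { useEffect } from 'react'

function HomePage() {
  useEffect(() => {
    // skip the request while react-snap's headless browser is pre-rendering
    if (navigator.userAgent !== 'ReactSnap') {
      fetchHomePageData() // hypothetical data-fetching call
    }
  }, [])
  return <div>{/* ... */}</div>
}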
(4) Run pre-rendering
Run yarn run build. After the build finishes, react-snap runs automatically via the postbuild script.
(screenshot: react-snap output after the build)

2. Optimization results

We can see that the built index.html file now contains markup: the content of the shadow device home page.
(screenshot: pre-rendered index.html)

6. Lighthouse score after optimization

With the above applied (the loading-performance optimizations from Part 1 plus those from Part 3), the Lighthouse score is as follows. The jump from 91 to 99 points is due to pre-rendering, because the other three Part 3 optimizations are already provided by the Libra platform and were therefore already reflected in the 91-point score.
(screenshot: final Lighthouse report, score 99)

7. References

MOOC course: Front-end performance optimization — enterprise-level solutions, 6 angles + a big-company perspective
Corresponding course notes: A talk on front-end performance optimization
