[Front-end interns prepare for autumn recruitment]: Front-end performance optimization, recommended to bookmark


1. CDN

1. The concept of CDN

A CDN (Content Delivery Network) is a system of servers interconnected across the Internet. It delivers files such as music, images, video, and applications from the server closest to each user, providing faster, more reliable content delivery with high performance, scalability, and low cost.

A typical CDN system consists of the following three parts:

  • **Distribution service system:** its basic unit of work is the cache (edge) server, which responds directly to end users' access requests and quickly serves locally cached content. The cache also synchronizes content with the origin site, fetching updated or locally missing content from the origin and storing it locally. The number, scale, and total service capacity of the cache servers are the most basic indicators of a CDN system's service capability.
  • **Load balancing system:** schedules all incoming user requests and determines the actual address each user will access. Scheduling happens at two levels: global server load balancing (GSLB) and local server load balancing (SLB) within a node. GSLB mainly follows the proximity principle, making an "optimal" judgment across service nodes to decide the physical location of the cache that will serve the user; SLB balances load among the devices within a node.
  • **Operation management system:** split into operations-management and network-management subsystems, it handles the collection, aggregation, and delivery work needed to interact with external systems at the business level, including customer management, product management, billing management, and statistics/analysis.

2. The role of CDN

CDNs are generally used to host web resources (text, images, scripts, etc.), downloadable resources (media files, software, documents, etc.), and applications (portals, etc.), and to speed up access to all of them.

(1) In terms of performance, the role of introducing CDN is:

  • Users receive content from the nearest data center, with lower latency and faster content loading
  • Some resource requests are allocated to CDN, reducing the load on the server

(2) In terms of security, CDN helps defend against DDoS, MITM and other network attacks:

  • For DDoS: monitor and analyze abnormal traffic and limit its request frequency
  • For MITM: full-link HTTPS communication from the origin server to the CDN node to the ISP (Internet Service Provider)

In addition, CDN, as a basic cloud service, also has advantages in resource hosting and on-demand expansion (can cope with traffic peaks).

3. Principles of CDN

CDN and DNS are inseparable, so let's first look at DNS domain name resolution. When www.test.com is entered in the browser, resolution proceeds as follows:

(1) Check browser cache

(2) Check the operating system cache, common ones such as hosts file

(3) Check router cache

(4) If it is not found in the first few steps, it will query the LDNS server of the ISP (Internet Service Provider)

(5) If the LDNS server cannot resolve it either, resolution is requested from a root name server (Root Server), which proceeds in the following steps:

  • The root server returns the address of the top-level domain (TLD) name server, such as .com, .cn, .org, etc. In this example, the address of the .com server is returned
  • A request is then sent to the TLD server, which returns the address of the second-level domain (SLD) name server. In this example, the address of the test.com server is returned
  • A request is then sent to the SLD server, which returns the IP address for the queried domain. In this example, the IP address of www.test.com is returned
  • The local DNS server caches the result and returns it to the user, where it is also cached by the system
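The delegation chain in step (5) can be sketched as a toy resolver. The nested map standing in for the root, TLD, and SLD servers (and the IP address) is of course hypothetical; real resolvers speak the DNS protocol over the network:

```javascript
// Toy model of recursive DNS resolution: each level of the map hands us the
// next name server's data, mirroring root -> TLD -> SLD -> final A record.
const rootServer = {
  'com': {                               // TLD server data for .com
    'test.com': {                        // SLD (authoritative) server for test.com
      'www.test.com': '93.184.216.34'    // hypothetical A record
    }
  }
};

function resolve(domain) {
  const labels = domain.split('.');      // ['www', 'test', 'com']
  const tld = labels[labels.length - 1]; // 'com'
  const sld = labels.slice(-2).join('.');// 'test.com'
  const tldServer = rootServer[tld];     // ask root for the TLD server
  const sldServer = tldServer[sld];      // ask TLD server for the SLD server
  return sldServer[domain];              // ask SLD server for the record
}

console.log(resolve('www.test.com')); // '93.184.216.34'
```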

How CDN works:

(1) The process of users not using CDN to cache resources:

  1. The browser resolves the domain name through DNS (the resolution process above) and obtains the IP address of the domain's server.
  2. The browser sends a data request to that IP address.
  3. The server returns the response data to the browser.

(2) The process of users using CDN to cache resources:

  1. For the requested URL, the local DNS system finds during resolution that the domain corresponds to a CDN-dedicated DNS server, and hands resolution over to the CDN-dedicated DNS server pointed to by the CNAME record.
  2. The CDN-dedicated DNS server returns the IP address of the CDN's global load balancing device to the user.
  3. The user sends the data request to the CDN's global load balancing device.
  4. Based on the user's IP address and the requested content URL, the global load balancing device selects a regional load balancing device in the user's region and directs the user's request to it.
  5. The regional load balancing device selects a suitable cache server and returns that cache server's IP address to the global load balancing device.
  6. The global load balancing device returns the cache server's IP address to the user.
  7. The user sends the request to the cache server, which responds and delivers the requested content to the user's terminal.

If the cache server does not have the requested content, it asks its upper-level cache server, and so on, until the resource is found. If the content does not exist anywhere in the cache hierarchy, the request finally goes back to the origin server.


CNAME (canonical name, i.e. an alias): during domain resolution, the lookup for a domain may return not an IP address but a CNAME pointing at another domain; resolution then continues with that CNAME until the corresponding IP address is found.
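The CNAME hand-off in step 1 above can be illustrated with another toy lookup. The CDN hostname, regions, and IP addresses below are all made up for illustration:

```javascript
// Toy illustration of CDN resolution: the site's domain resolves to a CNAME
// pointing at the CDN, and the CDN's scheduling DNS (GSLB) picks a node by region.
const records = {
  'www.test.com': { type: 'CNAME', value: 'www.test.com.cdn-provider.net' } // hypothetical
};

// Hypothetical GSLB table: CDN hostname + user region -> edge node IP
const cdnGslb = {
  'www.test.com.cdn-provider.net': { 'eu': '203.0.113.10', 'us': '198.51.100.7' }
};

function cdnResolve(domain, userRegion) {
  const record = records[domain];
  if (record && record.type === 'CNAME') {
    // resolution is handed over to the CDN's DNS, which load-balances by region
    return cdnGslb[record.value][userRegion];
  }
  return record && record.value;
}

console.log(cdnResolve('www.test.com', 'eu')); // '203.0.113.10'
```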

4. CDN usage scenarios

  • Use a third-party CDN service: if you want to open-source a project, third-party CDN services can host and serve it.
  • Cache static resources on a CDN: place your site's static assets (JS, CSS, images, etc.) on a CDN; the whole project can even be placed on the CDN for one-click deployment.
  • Live streaming: live broadcast is essentially streaming-media transmission, and CDNs also support streaming media, so live streams can use a CDN to improve access speed. CDNs handle streaming media differently from ordinary static files: a normal file missing from an edge node is fetched from the upper layer, but streaming data volumes are so large that this back-to-origin approach would inevitably cause performance problems, so streaming media generally uses active push to the edge instead.

2. Lazy loading

1. The concept of lazy loading

Lazy loading, also called delayed loading or on-demand loading, defers loading image data in long web pages and is an effective way to optimize page performance. In a long page or application with many images, loading them all at once wastes resources, since the user can only see the images inside the visible window.

With image lazy loading, images outside the visible area are not loaded until the page scrolls them into view. Pages load faster and server load drops. Lazy loading suits pages with many images and long lists.

2. Characteristics of lazy loading

  • Reduces loading of unused resources: lazy loading significantly reduces server pressure and traffic, and also lightens the browser's load.
  • Improves user experience: loading many images at once forces users to wait longer; lazy loading greatly improves perceived performance.
  • Prevents excessive image loading from blocking other resources: too many concurrent image requests can delay other resource files and affect normal use of the application.

3. Implementation principle of lazy loading

An image is loaded when its src is set: once src is assigned a value, the browser requests the image resource. Based on this, we use an HTML5 data-xxx attribute to store the image's path; when the image needs to load, we assign the data-xxx value to src. This achieves on-demand image loading, i.e. lazy loading.

Note: the xxx in data-xxx is customizable; here we use data-src.

The key to lazy loading is determining which images the user currently needs. In the browser, the resources in the visible area are what the user needs, so when an image enters the visible area, fetch its real address and assign it to the image's src.

Use native JavaScript to implement lazy loading:

Knowledge points:

(1) window.innerHeight is the height of the browser's visible area (viewport)

(2) document.body.scrollTop || document.documentElement.scrollTop is the distance the page has scrolled

(3) img.offsetTop is the distance from the top of the element to the top of the document (including the scrolled distance)

(4) Condition for loading an image: img.offsetTop < window.innerHeight + document.body.scrollTop


Code:

<div class="container">
     <img src="loading.gif" data-src="pic.png">
     <img src="loading.gif" data-src="pic.png">
     <img src="loading.gif" data-src="pic.png">
     <img src="loading.gif" data-src="pic.png">
     <img src="loading.gif" data-src="pic.png">
     <img src="loading.gif" data-src="pic.png">
</div>
<script>
var imgs = document.querySelectorAll('img');
function lazyLoad() {
    var scrollTop = document.body.scrollTop || document.documentElement.scrollTop;
    var winHeight = window.innerHeight;
    for (var i = 0; i < imgs.length; i++) {
        // load the image once it enters the viewport, and only once
        if (imgs[i].offsetTop < scrollTop + winHeight && imgs[i].getAttribute('data-src')) {
            imgs[i].src = imgs[i].getAttribute('data-src');
            imgs[i].removeAttribute('data-src');
        }
    }
}
// assign the handler function itself (not its return value), and run once for the first screen
window.onscroll = lazyLoad;
lazyLoad();
</script>
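A more modern sketch uses IntersectionObserver, which lets the browser notify us when an image enters the viewport instead of measuring offsets in a scroll handler. This assumes the IntersectionObserver API is available (it is in all evergreen browsers):

```javascript
// Lazy loading with IntersectionObserver: the callback fires when an observed
// image crosses into the viewport; we then swap data-src into src.
function observeLazyImages(selector = 'img[data-src]') {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        const img = entry.target;
        img.src = img.getAttribute('data-src');
        img.removeAttribute('data-src');
        observer.unobserve(img);   // each image only needs to load once
      }
    }
  });
  document.querySelectorAll(selector).forEach((img) => observer.observe(img));
  return observer;
}
```

Compared with the scroll-handler version above, no throttling is needed and no layout properties are read, so it avoids forced reflows.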

4. The difference between lazy loading and preloading

Both techniques improve page performance. The main difference is that preloading loads resources early, while lazy loading loads them late or not at all. Lazy loading relieves pressure on the server, while preloading adds server pressure up front.

  • Lazy loading, also called delayed loading, defers loading images in long pages until the user needs them. It speeds up first-screen loading, improves user experience, and reduces server pressure, and suits image-heavy, long pages such as e-commerce sites. The implementation principle: set each image's src attribute to an empty string, store the real path in a custom attribute, and on page scroll check whether the image has entered the visible area of the page; if so, take the real path from the custom attribute and assign it to the image's src, achieving lazy loading.
  • Preloading requests the needed resources in advance so they can later be served directly from cache. It reduces the user's waiting time and improves experience. The most common approach is to create an Image object in JS and set its src attribute to trigger the image download.
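A minimal preloading sketch using the Image object (the commented-out URLs are placeholders):

```javascript
// Preload images by creating Image objects; the browser starts downloading a
// file as soon as src is assigned, so later <img> tags can hit the cache.
function preloadImages(urls) {
  return Promise.all(urls.map((url) => new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(url);
    img.onerror = () => reject(new Error('failed to preload ' + url));
    img.src = url;                 // assigning src triggers the download
  })));
}

// preloadImages(['/img/banner.png', '/img/logo.png']).then(() => { /* ready */ });
```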

3. Reflow and redraw

1. Concepts and triggering conditions of reflow and redraw

(1) Reflow

When the size, structure, or attributes of some or all elements in the rendering tree change, the process in which the browser re-renders part or all of the document is called reflow .

The following operations can cause reflow:

  • First rendering of the page
  • The browser window size changes
  • The content of the element changes
  • The size or position of the element changes
  • The font size of an element changes
  • Activate CSS pseudo-classes
  • Query some properties or call some methods
  • Add or remove visible DOM elements

When a reflow (relayout) is triggered, because the browser lays out the page as a flow, surrounding DOM elements may need to be rearranged as well. The scope of impact comes in two kinds:

  • Global scope: Relayout the entire rendering tree starting from the root node
  • Local scope: Relayout a certain part of the rendering tree or a rendering object

(2) Redraw

When the style of some elements on the page changes, but does not affect its position in the document flow, the browser will redraw the elements. This process is called redrawing .

The following operations cause repaint:

  • Color, background related properties: background-color, background-image, etc.
  • outline related attributes: outline-color, outline-width, text-decoration
  • border-radius、visibility、box-shadow

Note: When reflow is triggered, redrawing will definitely be triggered, but redrawing does not necessarily trigger reflow.

2. How to avoid reflow and redraw?

Measures to reduce reflow and redraw:

  • When operating on the DOM, prefer nodes as deep (low) in the DOM tree as possible, so changes affect a smaller subtree.
  • Don't use table layouts: a small change can force the entire table to be re-laid out
  • Avoid CSS expressions
  • Don't manipulate element styles frequently; for static pages, modify a class name instead of the style.
  • Use absolute or fixed positioning to take elements out of the document flow, so changes to them don't affect other elements.
  • To avoid frequent DOM manipulation, create a document fragment (documentFragment), apply all DOM operations to it, and finally add it to the document.
  • Set the element to display: none first, then show it after the operations are complete: DOM operations on an element whose display is none do not cause reflow or repaint.
  • Group multiple DOM reads (or writes) together instead of interleaving reads and writes. This takes advantage of the browser's render queue mechanism.

The browser itself optimizes page reflow and repaint with a render queue.

The browser puts reflow and repaint operations into a queue; when the operations in the queue reach a certain number or time interval, the browser flushes the queue in a batch, turning multiple reflows and repaints into one.

As above, when multiple reads (or writes) are grouped together, all of them enter the queue before any flush, so only one reflow is triggered instead of several.
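The grouping advice can be sketched as follows; the functions and the width-halving operation are illustrative:

```javascript
// Interleaved read/write: each offsetWidth read forces the browser to flush
// the pending style write, triggering a reflow per element ("layout thrashing").
function halveWidthsThrashing(els) {
  els.forEach((el) => {
    const w = el.offsetWidth;               // read -> forces layout
    el.style.width = (w / 2) + 'px';        // write -> queued again
  });
}

// Batched: do all reads first, then all writes; the queued writes are
// flushed together, so at most one reflow is triggered.
function halveWidthsBatched(els) {
  const widths = els.map((el) => el.offsetWidth);                       // all reads
  els.forEach((el, i) => { el.style.width = (widths[i] / 2) + 'px'; }); // all writes
}
```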

3. How to optimize animation?

As for optimizing animation: animation generally requires frequent DOM manipulation, which causes page performance problems. We can set the animated element's position to absolute or fixed to take it out of the document flow, so its reflows do not affect the rest of the page.

4. What is documentFragment? What is the difference between using it and directly manipulating the DOM?

MDN's explanation of DocumentFragment:

DocumentFragment, the document fragment interface, is a minimal document object with no parent. It is used as a lightweight version of Document: like a standard document, it stores a document structure composed of nodes. The biggest difference from document is that a DocumentFragment is not part of the live DOM tree, so changes to it do not trigger re-rendering of the DOM tree and cause no performance problems.

When a DocumentFragment node is inserted into the document tree, it is not the fragment itself that is inserted but all of its descendant nodes. During frequent DOM operations, we can insert DOM elements into a DocumentFragment and then insert all the descendant nodes into the document at once. Compared with operating on the DOM directly, this avoids repeated page re-rendering and greatly improves page performance.
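A typical sketch: building a list of items off-DOM and inserting it once. The function name and item texts are illustrative:

```javascript
// Append many <li> nodes via a DocumentFragment: the nodes are assembled
// off-DOM, and only the final appendChild touches the live tree.
function appendItems(listEl, items) {
  const fragment = document.createDocumentFragment();
  for (const text of items) {
    const li = document.createElement('li');
    li.textContent = text;
    fragment.appendChild(li);
  }
  listEl.appendChild(fragment);   // one insertion into the live DOM
}
```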

4. Throttling and anti-shake

1. Understanding of throttling and anti-shake

  • Function debouncing (anti-shake) means executing the callback n seconds after the event fires; if the event fires again within those n seconds, the timer restarts. It suits click-triggered request events, preventing multiple user clicks from sending multiple requests to the backend.
  • Function throttling means that within a specified unit of time, the event's callback can run only once; if the event fires multiple times within that window, only one invocation takes effect. Throttling suits scroll event listeners, reducing how often the handler is called.

Application scenarios of anti-shake function:

  • Button submission: prevent multiple submissions; only the last click within the window is executed
  • Server-side validation: form validation that needs the server's cooperation should only fire for the last input of a continuous burst; the search-suggestion (autocomplete) feature in production environments is similar. lodash.debounce can be used here.

The most applicable scenarios for throttling functions:

  • Drag and drop scenario: only executed once within a fixed period of time to prevent excessively frequent triggering of position changes.
  • Zoom scenario: Monitor browser resize
  • Animation scene: avoid performance problems caused by triggering animation multiple times in a short period of time

2. Implement throttling function and anti-shake function

Implementation of function anti-shake:

function debounce(fn, wait) {
  var timer = null;

  return function() {
    var context = this,
      args = [...arguments];

    // If a timer already exists, cancel it and restart the countdown
    if (timer) {
      clearTimeout(timer);
      timer = null;
    }

    // Set a timer so the function runs only after the specified interval
    timer = setTimeout(() => {
      fn.apply(context, args);
    }, wait);
  };
}

Implementation of function throttling:

// Timestamp version
function throttle(fn, delay) {
  var preTime = Date.now();

  return function() {
    var context = this,
      args = [...arguments],
      nowTime = Date.now();

    // If the interval since the last run exceeds the specified delay, execute
    if (nowTime - preTime >= delay) {
      preTime = Date.now();
      return fn.apply(context, args);
    }
  };
}

// Timer version
function throttle (fun, wait){
  let timeout = null
  return function(){
    let context = this
    let args = [...arguments]
    if(!timeout){
      timeout = setTimeout(() => {
        fun.apply(context, args)
        timeout = null 
      }, wait)
    }
  }
}
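A quick demonstration of debounced behavior (the implementation is copied inline so the snippet is self-contained; the values being saved are arbitrary):

```javascript
// Rapid calls to a debounced function collapse into one trailing call.
function debounce(fn, wait) {   // same implementation as above
  var timer = null;
  return function () {
    var context = this, args = [...arguments];
    if (timer) { clearTimeout(timer); }
    timer = setTimeout(() => fn.apply(context, args), wait);
  };
}

const calls = [];
const save = debounce((v) => calls.push(v), 50);
save(1); save(2); save(3);                  // fired in one burst
setTimeout(() => console.log(calls), 100);  // only the last call survives: [3]
```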

5. Image optimization

1. How to optimize the pictures in the project?

  1. Avoid images where possible. Many decorative images can be replaced with CSS.
  2. On mobile devices the screen is only so wide, so there is no need to waste bandwidth loading the original image. Images are generally loaded via a CDN, which can compute a width to fit the screen and serve a correspondingly cropped image.
  3. Small images use base64 format
  4. Consolidate multiple icon files into one image (Sprite image)
  5. Choose the right image format:
    • For browsers that support WebP, prefer it: its better compression algorithm yields smaller files with image quality indistinguishable to the naked eye. The drawback is limited compatibility.
    • Use PNG for small images; in fact, most icons and similar images can be replaced by SVG.
    • Use JPEG for photos

2. Common image formats and usage scenarios

(1) BMP is a lossless bitmap that supports both indexed and direct colors. This image format performs almost no data compression, so images in BMP format are usually larger files.

(2) GIF is a lossless bitmap using indexed colors. Encoded using LZW compression algorithm. The small file size is the advantage of the GIF format. At the same time, the GIF format also has the advantages of supporting animation and transparency. However, the GIF format only supports 8-bit indexed colors, so the GIF format is suitable for scenes that do not have high color requirements and require a small file size.

(3) JPEG is a lossy, direct-color bitmap. The advantage of JPEG pictures is that they use direct colors. Thanks to richer colors, JPEG is very suitable for storing photos. Compared with GIF, JPEG is not suitable for storing corporate logos and wireframe pictures. Because lossy compression will cause the picture to be blurry, and the selection of direct colors will cause the picture file to be larger than GIF.

(4) PNG-8 is a lossless bitmap using indexed colors. PNG is a relatively new image format, and PNG-8 is a very good substitute for GIF. When possible, PNG-8 should be used instead of GIF, because for the same image effect PNG-8 has a smaller file size. PNG-8 also supports adjustable transparency, while GIF does not. Unless animation is required, there is no reason to use GIF instead of PNG-8.

(5) PNG-24 is a lossless bitmap using direct color. The advantage of PNG-24 is that it compresses the image data, making the file size of the PNG-24 format much smaller than BMP for the same effect. Of course, PNG24 images are still much larger than JPEG, GIF, and PNG-8.

(6) SVG is a lossless vector format: an SVG image consists of lines and curves plus instructions for drawing them. When you zoom in on an SVG image, you still see lines and curves rather than pixels, so SVG does not distort when enlarged, making it very suitable for logos, icons, etc.

(7) WebP is a new image format developed by Google. WebP is a bitmap that supports both lossy and lossless compression and uses direct color. It can be seen from the name that it is born for the Web. What does it mean to be born for the Web? That is to say, for pictures of the same quality, WebP has a smaller file size. Nowadays, websites are filled with a large number of pictures. If the file size of each picture can be reduced, the amount of data transmission between the browser and the server will be greatly reduced, thereby reducing the access delay and improving the access experience. Currently, only the Chrome browser and Opera browser support the WebP format, and the compatibility is not very good.

  • In the case of lossless compression, the file size of WebP images of the same quality is 26% smaller than that of PNG;
  • In the case of lossy compression, the file size of WebP images with the same image accuracy is 25%~34% smaller than that of JPEG;
  • The WebP image format supports image transparency. A lossless compressed WebP image requires only 22% of the additional file size to support transparency.
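Given the compatibility caveat, WebP is typically served with a fallback via the `<picture>` element; the file names here are illustrative:

```html
<!-- The browser uses the first <source> it supports and otherwise
     falls back to the plain <img> -->
<picture>
  <source srcset="photo.webp" type="image/webp">
  <img src="photo.jpg" alt="photo">
</picture>
```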

6. Webpack optimization

1. How to improve the packaging speed of webpack?

(1) Optimize Loader

Among loaders, Babel has the biggest impact on packaging efficiency: it parses source code into an AST, transforms the AST, and then generates new code. The larger the project, the more code there is to transform and the lower the efficiency. Of course, this can be optimized.

First, we optimize the file search range of Loader

module.exports = {
  module: {
    rules: [
      {
        // only apply babel to .js files
        test: /\.js$/,
        loader: 'babel-loader',
        // only look inside the src folder
        include: [resolve('src')],
        // paths that will not be searched
        exclude: /node_modules/
      }
    ]
  }
}

We want Babel to process only our own JS code; the code in node_modules is already compiled, so there is no need to process it again.

Of course, this is not enough. You can also cache the files Babel has compiled; next time, only the changed files need recompiling, which greatly speeds up packaging.

loader: 'babel-loader?cacheDirectory=true'

(2)HappyPack

Because Node runs JavaScript on a single thread, Webpack also packs on a single thread. Loader execution in particular involves many long compilation tasks, which leads to waiting.

HappyPack can convert the synchronous execution of Loader into parallel , so that system resources can be fully utilized to speed up packaging efficiency.

module: {
  loaders: [
    {
      test: /\.js$/,
      include: [resolve('src')],
      exclude: /node_modules/,
      // the id matches the HappyPack plugin below
      loader: 'happypack/loader?id=happybabel'
    }
  ]
},
plugins: [
  new HappyPack({
    id: 'happybabel',
    loaders: ['babel-loader?cacheDirectory'],
    // spawn 4 threads
    threads: 4
  })
]

(3)DllPlugin

DllPlugin packages specific libraries ahead of time and then references them. This greatly reduces how often those libraries are rebuilt (only when they are upgraded), and also extracts common code into separate files. DllPlugin is used as follows:

// configured in a separate file
// webpack.dll.conf.js
const path = require('path')
const webpack = require('webpack')
module.exports = {
  entry: {
    // the libraries to bundle together
    vendor: ['react']
  },
  output: {
    path: path.join(__dirname, 'dist'),
    filename: '[name].dll.js',
    library: '[name]-[hash]'
  },
  plugins: [
    new webpack.DllPlugin({
      // name must match output.library
      name: '[name]-[hash]',
      // this must match the context in DllReferencePlugin
      context: __dirname,
      path: path.join(__dirname, 'dist', '[name]-manifest.json')
    })
  ]
}

Then run Webpack with this configuration file to generate the dependency (manifest) file, and use DllReferencePlugin to bring the prebuilt dependencies into the project:

// webpack.conf.js
module.exports = {
  // ...other configuration omitted
  plugins: [
    new webpack.DllReferencePlugin({
      context: __dirname,
      // manifest is the JSON file produced by the dll build above
      manifest: require('./dist/vendor-manifest.json'),
    })
  ]
}

(4) Code compression

In Webpack 3, code is generally compressed with UglifyJS, which runs single-threaded. To speed this up, you can use webpack-parallel-uglify-plugin to run UglifyJS in parallel.

In Webpack 4 these steps are no longer needed: setting mode to production enables them by default. Code compression is a must-do performance optimization. Besides JS we can also compress HTML and CSS, and while compressing JS we can configure the minifier to delete code such as console.log calls.
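For example, the console-stripping mentioned above can be configured through the minimizer. A sketch using terser-webpack-plugin (which Webpack 4 uses under the hood; install it separately to customize its options):

```javascript
// webpack.conf.js (fragment)
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  mode: 'production',            // enables minification by default
  optimization: {
    minimizer: [
      new TerserPlugin({
        terserOptions: {
          compress: {
            drop_console: true   // strip console.* calls from the bundle
          }
        }
      })
    ]
  }
};
```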

(5) Others

Packaging speed can also be improved with a few small optimizations:

  • resolve.extensions: the list of file extensions tried when an import omits the extension; the default search order is ['.js', '.json']. Keep this list as short as possible and put the most frequently used extensions first.
  • resolve.alias: map a path to an alias so Webpack can locate it faster.
  • module.noParse: if you are sure a file has no other dependencies, this attribute stops Webpack from scanning it, which helps a lot for large libraries.

2. How to reduce Webpack packaging size

(1) Load on demand

In a SPA project there are many routed pages. Packing them all into one JS file merges multiple requests, but it also loads a lot of unnecessary code and takes longer. To present the home page to users faster, the file loaded for it should be as small as possible; on-demand loading achieves this by packing each routed page into a separate file. And not only routes: large libraries such as lodash can also be loaded on demand.

The concrete code for on-demand loading differs across frameworks, but the underlying mechanism is the same: when needed, the corresponding file is downloaded and a Promise is returned; when the Promise resolves, the callback is executed.
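At bottom this rests on dynamic import(), which bundlers turn into a separate chunk fetched at call time. The module path, chunk name, and function name below are hypothetical:

```javascript
// import() returns a Promise for the module; webpack emits the settings page
// as its own chunk and only downloads it when this function runs.
function openSettingsPage() {
  return import(/* webpackChunkName: "settings" */ './pages/settings.js')
    .then((mod) => mod.default)      // the page component/module
    .catch((err) => { console.error('chunk failed to load', err); });
}
```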

(2)Scope Hoisting

Scope Hoisting will analyze the dependencies between modules and merge the packaged modules into one function as much as possible.

For example, if you want to package two files:

// test.js
export const a = 1
// index.js
import { a } from './test.js'

In this case, the packaged code will look like this:

[
  /* 0 */
  function (module, exports, require) {
    //...
  },
  /* 1 */
  function (module, exports, require) {
    //...
  }
]

But if you use Scope Hoisting, the code will be merged into a function as much as possible, and it will become similar to this code:

[
  /* 0 */
  function (module, exports, require) {
    //...
  }
]

The code generated this way is clearly much smaller than before. To enable this feature in Webpack 4, just turn on optimization.concatenateModules:

module.exports = {
  optimization: {
    concatenateModules: true
  }
}

(3)Tree Shaking

Tree Shaking can delete unreferenced code in the project , such as:

// test.js
export const a = 1
export const b = 2
// index.js
import { a } from './test.js'

In the situation above, the variable b in the test file is not used anywhere in the project, so it will not be packed into the bundle.

With Webpack 4, this optimization is activated automatically in production mode.
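Tree shaking relies on ES module syntax plus a hint that modules are side-effect free. In package.json that hint looks like the sketch below ("my-app" is a placeholder; listing "*.css" keeps imported stylesheets from being dropped as "unused"):

```json
{
  "name": "my-app",
  "sideEffects": ["*.css"]
}
```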

3. How to use webpack to optimize front-end performance?

Using webpack to optimize front-end performance means optimizing the output of webpack so that the final packaged result runs quickly and efficiently in the browser.

  • Compress code: remove dead code and comments, simplify code, etc. Use webpack's UglifyJsPlugin and ParallelUglifyPlugin to compress JS files, and cssnano (css-loader?minimize) to compress CSS.
  • Use CDN acceleration: during the build, rewrite referenced static resource paths to their CDN paths, via webpack's output options and each loader's publicPath parameter.
  • Tree Shaking: remove code segments that can never be reached. This can be enabled by starting webpack with the --optimize-minimize flag.
  • Code Splitting: split the code into chunks by route or component, so it can be loaded on demand and the browser cache is fully utilized.
  • Extract common third-party libraries: use the SplitChunksPlugin to extract shared modules, and rely on the browser cache to keep these rarely-changing common chunks cached long-term.

4. How to improve the build speed of webpack?

  1. With multiple entries, use CommonsChunkPlugin to extract common code
  2. Extract common libraries through the externals configuration
  3. Use DllPlugin and DllReferencePlugin to precompile resource modules: precompile the npm packages that are referenced but never modified with DllPlugin, then load the precompiled modules with DllReferencePlugin
  4. Use HappyPack for multi-threaded, faster compilation
  5. Use webpack-uglify-parallel to speed up UglifyJsPlugin; in principle, webpack-uglify-parallel compresses in parallel across CPU cores
  6. Use Tree Shaking and Scope Hoisting to strip redundant code

Origin blog.csdn.net/m0_46374969/article/details/132455539