Nine Performance Optimization Tips for Vue.js

1、Functional components

For the first trick, functional components, you can check out this live example.

The component code before optimization is as follows:

<template>
  <div class="cell">
    <div v-if="value" class="on"></div>
    <section v-else class="off"></section>
  </div>
</template>

<script>
export default {
  props: ['value'],
}
</script>

The optimized component code is as follows:

<template functional>
  <div class="cell">
    <div v-if="props.value" class="on"></div>
    <section v-else class="off"></section>
  </div>
</template>

Then we render 800 of these components, before and after optimization, inside a parent component, trigger component updates by modifying data on each frame, open Chrome's Performance panel to record their performance, and get the following results.

Before optimization:

Optimized:

Comparing these two figures, we can see that the script execution time before optimization is longer than after optimization. Since the JS engine runs on a single thread and the JS thread blocks the UI thread, rendering is blocked whenever script execution takes too long, causing the page to freeze. The optimized version's script execution time is shorter, so its performance is better.

So why is the JS execution time shorter with functional components? It comes down to how functional components are implemented: you can think of one as a function that renders a piece of DOM based on the context data you pass in.

Functional components differ from ordinary object-based components in that they are not treated as real components. We know that during the patch process, if a node is a component vnode, Vue recursively runs the initialization process of the sub-component; the vnodes generated by a functional component's render, however, are ordinary vnodes, so there is no recursion into sub-components and the rendering overhead is much lower.

Accordingly, a functional component has no state, no reactive data, and no lifecycle hooks. You can think of it as stripping part of the DOM out of an ordinary component's template and rendering it through a function, which is a kind of reuse at the DOM level.
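
To make this concrete, here is a minimal sketch of the same cell written purely in JavaScript with Vue 2's functional option (the shape mirrors the template above; it is an illustration, not code from the example project):

// A functional component is just a render function that receives a context
// object (props, children, etc.) instead of a component instance.
export default {
  functional: true,
  props: ['value'],
  render (h, { props }) {
    return h('div', { class: 'cell' }, [
      props.value
        ? h('div', { class: 'on' })
        : h('section', { class: 'off' })
    ])
  }
}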

2、Child component splitting

For the second trick, child component splitting, you can check out this online example.

The component code before optimization is as follows:

<template>
  <div :style="{ opacity: number / 300 }">
    <div>{{ heavy() }}</div>
  </div>
</template>

<script>
export default {
  props: ['number'],
  methods: {
    heavy () {
      const n = 100000
      let result = 0
      for (let i = 0; i < n; i++) {
        result += Math.sqrt(Math.cos(Math.sin(42)))
      }
      return result
    }
  }
}
</script>

The optimized component code is as follows:

<template>
  <div :style="{ opacity: number / 300 }">
    <ChildComp/>
  </div>
</template>

<script>
export default {
  components: {
    ChildComp: {
      methods: {
        heavy () {
          const n = 100000
          let result = 0
          for (let i = 0; i < n; i++) {
            result += Math.sqrt(Math.cos(Math.sin(42)))
          }
          return result
        },
      },
      render (h) {
        return h('div', this.heavy())
      }
    }
  },
  props: ['number']
}
</script>

Then we render 300 of these components, before and after optimization, inside a parent component, trigger component updates by modifying data on each frame, open Chrome's Performance panel to record their performance, and get the following results.

Before optimization:

Optimized:

Comparing these two figures, we can see that the script execution time after optimization is significantly shorter than before, so the performance experience is better.

So why is there a difference? Let's look at the component before optimization. The example simulates a time-consuming task with the heavy function, and this function runs on every render, so each render of the component spends a long time executing JavaScript.

The optimized version encapsulates the time-consuming heavy logic inside the ChildComp sub-component. Since Vue updates at component granularity, although the data modification on each frame causes the parent component to re-render, ChildComp does not re-render, because it has no reactive data changes of its own. Therefore, the optimized component does not run the time-consuming task on every render, and the JavaScript execution time naturally drops.

However, I have raised some different opinions on this optimization; for details, you can check this issue. I think a computed property is a better optimization than splitting out a sub-component in this scenario: thanks to the caching of computed properties, the time-consuming logic only runs on the first render, and there is no extra overhead of rendering a sub-component.

In actual work, there are many scenarios where computed properties are used to optimize performance; after all, they also reflect the optimization idea of trading space for time.
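
As a sketch of that alternative (adapted from the example above, not the project's actual code), the heavy work can be moved into a computed property so it runs once and is cached:

<template>
  <div :style="{ opacity: number / 300 }">
    <div>{{ heavyResult }}</div>
  </div>
</template>

<script>
export default {
  props: ['number'],
  computed: {
    // Cached after the first evaluation; it only re-runs if a reactive
    // dependency changes, and this getter has none.
    heavyResult () {
      const n = 100000
      let result = 0
      for (let i = 0; i < n; i++) {
        result += Math.sqrt(Math.cos(Math.sin(42)))
      }
      return result
    }
  }
}
</script>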

3、Local variables

The third trick, local variables, you can check out this online example.

The component code before optimization is as follows:

<template>
  <div :style="{ opacity: start / 300 }">{{ result }}</div>
</template>

<script>
export default {
  props: ['start'],
  computed: {
    base () {
      return 42
    },
    result () {
      let result = this.start
      for (let i = 0; i < 1000; i++) {
        result += Math.sqrt(Math.cos(Math.sin(this.base))) + this.base * this.base + this.base + this.base * 2 + this.base * 3
      }
      return result
    },
  },
}
</script>

The optimized component code is as follows:

<template>
  <div :style="{ opacity: start / 300 }">{{ result }}</div>
</template>

<script>
export default {
  props: ['start'],
  computed: {
    base () {
      return 42
    },
    result ({ base, start }) {
      let result = start
      for (let i = 0; i < 1000; i++) {
        result += Math.sqrt(Math.cos(Math.sin(base))) + base * base + base + base * 2 + base * 3
      }
      return result
    },
  },
}
</script>

Then we render 300 of these components, before and after optimization, inside a parent component, trigger component updates by modifying data on each frame, open Chrome's Performance panel to record their performance, and get the following results.

Before optimization:

Optimized:

Comparing these two figures, we can see that the script execution time after optimization is significantly shorter than before, so the performance experience is better.

This is mainly due to a difference in how result is implemented. The component before optimization accesses this.base many times during the calculation, whereas the optimized component first caches this.base in the local variable base and then accesses base directly afterwards.

So why does this difference affect performance? The reason is that every access to this.base triggers its getter, since base is a reactive property, and the getter then runs the dependency-collection logic. If that logic runs too often, as in the example, where hundreds of loop iterations update hundreds of components and each component triggers a computed recalculation that runs dependency collection many times, performance naturally drops.

As far as the requirement goes, it is enough for this.base to trigger its getter and run dependency collection once, saving the result to a local variable. Later accesses to base do not trigger the getter or go through the dependency-collection logic again, so performance naturally improves.

This is a very practical optimization technique, because many people habitually write this.xxx when developing Vue.js projects, without noticing what happens behind that access. When the number of accesses is small, the performance problem is not noticeable, but once accesses multiply, for example inside a large loop as in this example, performance problems appear.
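
A minimal sketch of the habit fix (the list prop and total method here are hypothetical, only for illustration):

export default {
  props: ['list'],
  methods: {
    total () {
      const list = this.list // one reactive getter call, one dependency collection
      let sum = 0
      for (let i = 0; i < list.length; i++) {
        sum += list[i] // plain local access, no reactive getter involved
      }
      return sum
    }
  }
}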

When I optimized the performance of ZoomUI's Table component, I used this local-variable technique while rendering the table body and wrote a benchmark for comparison: when re-rendering a 1000 x 10 table with updated data, ZoomUI's Table performed nearly twice as well as ElementUI's Table.

4、Reuse DOM with v-show

The fourth trick, reusing DOM with v-show; you can check out this online example.

The component code before optimization is as follows:

<template functional>
  <div class="cell">
    <div v-if="props.value" class="on">
      <Heavy :n="10000"/>
    </div>
    <section v-else class="off">
      <Heavy :n="10000"/>
    </section>
  </div>
</template>

The optimized component code is as follows:

<template functional>
  <div class="cell">
    <div v-show="props.value" class="on">
      <Heavy :n="10000"/>
    </div>
    <section v-show="!props.value" class="off">
      <Heavy :n="10000"/>
    </section>
  </div>
</template>

Then we render 200 of these components, before and after optimization, inside a parent component, trigger component updates by modifying data on each frame, open Chrome's Performance panel to record their performance, and get the following results.

Before optimization:

Optimized:

Comparing these two figures, we can see that the script execution time after optimization is significantly shorter than before, so the performance experience is better.

The main difference before and after optimization is that the v-show directive replaces the v-if directive to control whether the component is shown or hidden. Although v-show and v-if look similar on the surface, both controlling visibility, their internal implementations differ a lot.

The v-if directive is compiled into a ternary expression at the compile stage, producing conditional rendering. For example, the component template before optimization compiles into the following render function:

function render() {
  with(this) {
    return _c('div', {
      staticClass: "cell"
    }, [(props.value) ? _c('div', {
      staticClass: "on"
    }, [_c('Heavy', {
      attrs: {
        "n": 10000
      }
    })], 1) : _c('section', {
      staticClass: "off"
    }, [_c('Heavy', {
      attrs: {
        "n": 10000
      }
    })], 1)])
  }
}

When the value of props.value changes, the corresponding component updates. For the node rendered by v-if, since the old and new vnodes are different, the core diff algorithm removes the old vnode and creates a new one during comparison, which means a new Heavy component is created and goes through Heavy component initialization, render, patch, and so on all over again.

Therefore, with v-if every component update creates a new Heavy sub-component, and when many components update, this naturally causes performance pressure.

When we use the v-show directive instead, the optimized component template compiles into the following render function:

function render() {
  with(this) {
    return _c('div', {
      staticClass: "cell"
    }, [_c('div', {
      directives: [{
        name: "show",
        rawName: "v-show",
        value: (props.value),
        expression: "props.value"
      }],
      staticClass: "on"
    }, [_c('Heavy', {
      attrs: {
        "n": 10000
      }
    })], 1), _c('section', {
      directives: [{
        name: "show",
        rawName: "v-show",
        value: (!props.value),
        expression: "!props.value"
      }],
      staticClass: "off"
    }, [_c('Heavy', {
      attrs: {
        "n": 10000
      }
    })], 1)])
  }
}

When the value of props.value changes, the corresponding component updates. For nodes rendered with v-show, the old and new vnodes are consistent, so all that is needed is patchVnode. How, then, does it show and hide the DOM nodes?

It turns out that during patchVnode, the update hook of the v-show directive is executed internally, and v-show sets the style.display value of the DOM element it is applied to according to the directive's bound value, thereby controlling show and hide.
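
A simplified sketch of that idea (paraphrased, not Vue's actual source; the __originalDisplay key is made up for this sketch) looks like this:

// A v-show-like custom directive: it only toggles style.display on the
// element that already exists, instead of destroying and recreating it.
const vShowLike = {
  bind (el, { value }) {
    el.__originalDisplay = el.style.display === 'none' ? '' : el.style.display
    el.style.display = value ? el.__originalDisplay : 'none'
  },
  update (el, { value, oldValue }) {
    if (!value === !oldValue) return
    el.style.display = value ? el.__originalDisplay : 'none'
  }
}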

Therefore, compared with v-if, which keeps destroying and creating DOM, v-show only toggles the visibility of existing DOM, so its overhead is much smaller than v-if's, and the more complex the internal DOM structure is, the bigger the performance gap.

However, v-show's performance advantage over v-if applies to the component's update phase. If we only consider the initialization phase, v-if performs better than v-show, because it renders only one branch, whereas v-show renders both branches and controls their visibility through style.display.

With v-show, all components inside both branches are rendered and their lifecycle hooks execute; with v-if, the components inside the branch that is not hit are not rendered and their lifecycle hooks do not execute.

Therefore, you need to understand their principles and differences in order to use the appropriate directive in each scenario.

5、KeepAlive

The fifth trick, using the KeepAlive component to cache DOM; you can check out this online example.

The component code before optimization is as follows:

<template>
  <div id="app">
    <router-view/>
  </div>
</template>

The optimized component code is as follows:

<template>
  <div id="app">
    <keep-alive>
      <router-view/>
    </keep-alive>
  </div>
</template>

When we click the button to switch between Simple Page and Heavy Page, different views are rendered, and rendering the Heavy Page is very time-consuming. We open Chrome's Performance panel to record their performance, perform the above operations before and after optimization, and get the following results.

Before optimization:

Optimized:

Comparing these two figures, we can see that the script execution time after optimization is significantly shorter than before, so the performance experience is better.

In the non-optimized scenario, every time we click the button to switch the routing view, the component is re-rendered, and rendering it goes through component initialization, render, patch, and other processes. If the component is complex or deeply nested, the whole render takes a very long time.

After using KeepAlive, the vnode and DOM of the component wrapped by KeepAlive are cached after the first render. The next time the component renders, the corresponding vnode and DOM are taken directly from the cache and rendered, without going through the whole series of processes such as component initialization, render, and patch, which reduces script execution time and improves performance.

But using the KeepAlive component is not free: it takes up more memory for caching, a typical application of the space-for-time optimization idea.
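
If that memory cost becomes a concern, one option worth knowing about (the max prop of keep-alive, available since Vue 2.5 as far as I know) is to cap how many component instances are cached; a small sketch:

<template>
  <div id="app">
    <!-- A sketch: keep at most 5 cached component instances -->
    <keep-alive :max="5">
      <router-view/>
    </keep-alive>
  </div>
</template>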

6、Deferred features

The sixth technique, using deferred features to render parts of a component in batches; you can check out this online example.

The component code before optimization is as follows:

<template>
  <div class="deferred-off">
    <VueIcon icon="fitness_center" class="gigantic"/>

    <h2>I'm an heavy page</h2>

    <Heavy v-for="n in 8" :key="n"/>

    <Heavy class="super-heavy" :n="9999999"/>
  </div>
</template>

The optimized component code is as follows:

<template>
  <div class="deferred-on">
    <VueIcon icon="fitness_center" class="gigantic"/>

    <h2>I'm an heavy page</h2>

    <template v-if="defer(2)">
      <Heavy v-for="n in 8" :key="n"/>
    </template>

    <Heavy v-if="defer(3)" class="super-heavy" :n="9999999"/>
  </div>
</template>

<script>
import Defer from '@/mixins/Defer'

export default {
  mixins: [
    Defer(),
  ],
}
</script>

When we click the button to switch between Simple Page and Heavy Page, different views are rendered, and rendering the Heavy Page is very time-consuming. We open Chrome's Performance panel to record their performance, perform the above operations before and after optimization, and get the following results.

Before optimization:

Optimized:

Comparing these two pictures, we can see that before optimization, when switching from Simple Page to Heavy Page, the page still shows the Simple Page near the end of a render, which gives the feeling that the page is stuck. After optimization, when switching from Simple Page to Heavy Page, the Heavy Page already starts appearing early in the render, and it is rendered progressively.

The difference before and after optimization is mainly that the latter uses the Defer mixin. So how does it work? Let's find out:

export default function (count = 10) {
  return {
    data () {
      return {
        // how many defer "priorities" have been unlocked so far
        displayPriority: 0
      }
    },

    mounted () {
      this.runDisplayPriority()
    },

    methods: {
      runDisplayPriority () {
        const step = () => {
          // unlock one more priority on each animation frame, up to `count`
          requestAnimationFrame(() => {
            this.displayPriority++
            if (this.displayPriority < count) {
              step()
            }
          })
        }
        step()
      },

      // used in templates as v-if="defer(n)": true once priority n is unlocked
      defer (priority) {
        return this.displayPriority >= priority
      }
    }
  }
}

The main idea of Defer is to split one render of a component into multiple passes. It maintains a displayPriority variable that is incremented on each frame via requestAnimationFrame, up to count. A component that uses the Defer mixin can then use v-if="defer(xxx)" to delay rendering certain blocks until displayPriority has reached xxx.

When you have components whose rendering takes a long time, it is a good idea to use Defer for progressive rendering, to avoid the render getting stuck because of long JS execution.

7、Time slicing

The seventh technique, time slicing; you can check out this online example.

The code before optimization is as follows:

fetchItems ({ commit }, { items }) {
  commit('clearItems')
  commit('addItems', items)
}

The optimized code is as follows:

async fetchItems ({ commit }, { items, splitCount }) {
  commit('clearItems')
  const queue = new JobQueue()
  splitArray(items, splitCount).forEach(
    chunk => queue.addJob(done => {
      // commit the data in time slices
      requestAnimationFrame(() => {
        commit('addItems', chunk)
        done()
      })
    })
  )
  await queue.start()
}
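
JobQueue and splitArray are helpers from the example project; a minimal sketch of what they might look like (an assumption for readability, not the demo's actual code) is:

// Hypothetical helpers, sketched for illustration only.
// splitArray chunks a flat array; JobQueue runs queued jobs one after another.
function splitArray (items, chunkSize) {
  const chunks = []
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push(items.slice(i, i + chunkSize))
  }
  return chunks
}

class JobQueue {
  constructor () {
    this.jobs = []
  }
  addJob (job) {
    // each job receives a done callback, matching the usage above
    this.jobs.push(job)
  }
  start () {
    // run jobs sequentially; each job calls done() to resolve its promise
    return this.jobs.reduce(
      (promise, job) => promise.then(() => new Promise(job)),
      Promise.resolve()
    )
  }
}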

We first create 10,000 pieces of fake data by clicking the Generate items button, then click the Commit items button with Time-slicing turned off and on respectively, open Chrome's Performance panel to record their performance, and get the following results.

Before optimization:

Optimized:

Comparing these two figures, we can see that the total script execution time before optimization is actually a bit shorter than after optimization. But in terms of actual look and feel, clicking the submit button before optimization freezes the page for about 1.2 seconds, while after optimization the page no longer freezes completely, although there is still a feeling of rendering jank.

So why is the page stuck before optimization? Because too much data is submitted at one time, the internal JS execution time is too long, which blocks the UI thread and causes the page to freeze.

After optimization, the page still stutters because the data is split at a granularity of 1,000 items, which still puts pressure on re-rendering the components. We observe that the fps is only around a dozen, which feels janky; usually a page feels smooth as long as it reaches 60 fps. If we change the split granularity to 100, the fps can basically reach above 50, and the page renders more smoothly, but the total time to commit all 10,000 items gets longer.

Using time slicing can avoid the page getting stuck. Usually we would also add a loading effect while processing this kind of time-consuming task; in this example, we can turn on the loading animation and then submit the data. Comparing the two, before optimization the loading animation does not get a chance to show, because too much data is submitted at once and the long-running JS blocks the UI thread; after optimization, since the data is committed in multiple time slices, each JS run is shorter and the loading animation can be displayed.

One thing to note here: although we use the requestAnimationFrame API to split time slices, requestAnimationFrame itself cannot guarantee a full frame rate. What it guarantees is that the callback you pass in runs after each browser repaint. To keep a full frame rate, the only way is to keep the JS running time within a tick under about 17ms.

8、Non-reactive data

The eighth tip, using non-reactive data; you can check out this online example.

The code before optimization is as follows:

const data = items.map(
  item => ({
    id: uid++,
    data: item,
    vote: 0
  })
)

The optimized code is as follows:

const data = items.map(
  item => optimizeItem(item)
)

function optimizeItem (item) {
  const itemData = {
    id: uid++,
    vote: 0
  }
  Object.defineProperty(itemData, 'data', {
    // Mark as non-reactive
    configurable: false,
    value: item
  })
  return itemData
}

Still in the previous example, we first create 10,000 pieces of fake data by clicking the Generate items button, then click the Commit items button with Partial reactivity turned off and on respectively, open Chrome's Performance panel to record their performance, and get the following results.

Before optimization:

Optimized:

Comparing these two figures, we can see that the script execution time after optimization is significantly shorter than before, so the performance experience is better.

The reason for this difference is that data committed internally is defined as reactive by default, and if a property of the data is an object, Vue recursively makes its sub-properties reactive as well. So when a lot of data is committed, this becomes a time-consuming process.

After optimization, we define the data property manually with configurable set to false (and, because defineProperty defaults enumerable to false, it is also non-enumerable). As a result, when Vue walks the object internally, the key array it obtains through Object.keys(obj) skips data, so defineReactive is never called for it. And since data points to an object, this also cuts out the recursive reactive conversion of everything underneath it, removing that part of the performance cost. The larger the amount of data, the more obvious the effect of this optimization.
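
For reference, a simplified sketch of the guard inside Vue 2's defineReactive (paraphrased, not the verbatim source) shows why a non-configurable property is left alone:

// Paraphrased sketch of Vue 2's defineReactive guard.
function defineReactive (obj, key) {
  const property = Object.getOwnPropertyDescriptor(obj, key)
  if (property && property.configurable === false) {
    // non-configurable properties are skipped; no reactive getter/setter is installed
    return
  }
  // ... otherwise Vue installs a reactive getter/setter and recurses into object values
}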

In fact, there are many other optimizations along these lines. For example, some data we define in a component does not necessarily need to live in data. Some data is never used in the template and we do not need to watch its changes; we just want to share it in the component's context. In that case we can simply mount it on the component instance this:

import BScroll from 'better-scroll' // assuming BScroll here comes from the better-scroll library

export default {
  created() {
    // declared on `this` outside of data(), so Vue never makes it reactive
    this.scroll = null
  },
  mounted() {
    this.scroll = new BScroll(this.$el)
  }
}

This way we can share the scroll object in the component context, even though it is not a reactive object.

9、Virtual scrolling

For the ninth tip, using a virtual scrolling component; you can check out this live example.

The code of the component before optimization is as follows:

<div class="items no-v">
  <FetchItemViewFunctional
    v-for="item of items"
    :key="item.id"
    :item="item"
    @vote="voteItem(item)"
  />
</div>

The optimized code is as follows:

<recycle-scroller
  class="items"
  :items="items"
  :item-size="24"
>
  <template v-slot="{ item }">
    <FetchItemView
      :item="item"
      @vote="voteItem(item)"
    />
  </template>
</recycle-scroller>
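
The recycle-scroller component above comes from Guillaume Chau's vue-virtual-scroller library. A minimal registration sketch (assuming vue-virtual-scroller v1; check the library's README for the exact setup of your version) might look like this:

// Assumed setup for vue-virtual-scroller v1.
import Vue from 'vue'
import { RecycleScroller } from 'vue-virtual-scroller'
import 'vue-virtual-scroller/dist/vue-virtual-scroller.css'

Vue.component('RecycleScroller', RecycleScroller)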

Still in the previous example, we need to turn on View list, then click the Generate items button to create 10,000 pieces of fake data (note that the online example can only create up to 1,000 items; 1,000 items does not show the optimization well, so I modified the source code's limit and ran it locally to create 10,000), then click the Commit items button in the Unoptimized and RecycleScroller views respectively, scroll the page, open Chrome's Performance panel to record their performance, and get the following results.

Before optimization:

Optimized:

Comparing these two pictures, we find that without optimization, with 10,000 items the fps is only in the single digits while scrolling and only around a dozen when not scrolling. The reason is that the unoptimized scenario renders too much DOM, which by itself puts a lot of pressure on rendering. After optimization, even with 10,000 items, the fps stays above 30 while scrolling and reaches a full 60 when not scrolling.

The reason for this difference is that virtual scrolling renders only the DOM inside the viewport, so the total amount of DOM rendered is very small and performance is naturally much better.

The virtual scroll component is also written by Guillaume Chau; interested readers can study its source code. Its basic principle is to listen to scroll events, dynamically update which DOM elements need to be displayed, and calculate their offset within the view.

The virtual scroll component is not free either: it has to do calculations in real time while scrolling, so there is a certain script execution cost. Therefore, if the list is not very large, ordinary scrolling is good enough.

Summary

Through this article, I hope you can understand these nine Vue.js performance optimization techniques and apply them to real development projects. Beyond the techniques above, there are other commonly used optimizations such as lazy loading images, lazy loading components, and async components.

Before doing performance optimization, we need to analyze where the performance bottleneck is so that we can apply the right remedy. In addition, performance optimization needs data to back it up: before you optimize anything, collect performance data first, so that afterwards you can see the effect of the optimization by comparing the data.

I hope that in future development you will not be satisfied with merely implementing the requirements, but will also think about the possible performance impact of every line of code you write.

This article was first published on the WeChat official account "Lao Huang's front-end private kitchen"; you are welcome to follow it.

Author: Huang Yi
Link: https://juejin.cn/post/6922641008106668045
Source: Rare Earth Nuggets
The copyright belongs to the author. For commercial reprint, please contact the author for authorization; for non-commercial reprint, please indicate the source.
