Performance optimization of DOM operations

During development you will inevitably run into situations where you need to operate on the DOM, and DOM operations cost performance. So today let's look at some ways to optimize performance when manipulating the DOM:

1. Choose the more performant way to get DOM elements

First, let's look at the common ways to get DOM elements with JavaScript:

  • document.getElementById() / getElementsByClassName() / getElementsByTagName()
  • document.querySelector() / querySelectorAll()

All of the above can fetch DOM elements for us, but which family is cheaper, getElementXX or querySelectorXX? Let's find out.

First, we need to understand the difference between the collections the two return.

Consider the following example and guess what it prints:
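A minimal sketch (the .box class name is illustrative; assume the page starts with a single <div class="box"> element):

```javascript
// Assume the page initially contains one element: <div class="box"></div>
const byClass = document.getElementsByClassName('box'); // live HTMLCollection
const bySelector = document.querySelectorAll('.box');   // static NodeList

// Now add a second .box to the document, after both queries were made
const extra = document.createElement('div');
extra.className = 'box';
document.body.appendChild(extra);

console.log(byClass.length);    // 2 — the live collection picked up the new node
console.log(bySelector.length); // 1 — the snapshot is unchanged
```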

If you run this in a browser, you will find that the collection obtained with getElementsByClassName ends up printing a length of 2, while the one obtained with querySelectorAll prints a length of 1.

This is because getElementXX returns an HTMLCollection, which is "live": when the underlying document is updated, the collection updates along with it.

querySelectorXX, on the other hand, returns a static NodeList. It does not track the document in real time; it is a snapshot of the DOM structure at the moment of the query.

Because getElementXX collections update in real time, they can cost considerably more than querySelectorXX, and if you are not careful they can easily produce an infinite loop:
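A sketch of how that happens (the .box class name is illustrative): every element appended inside the loop also grows the live collection, so the loop condition never becomes false.

```javascript
const boxes = document.getElementsByClassName('box'); // live collection

for (let i = 0; i < boxes.length; i++) {
  const clone = document.createElement('div');
  clone.className = 'box';
  // Each append grows boxes.length by one, so i never catches up: infinite loop
  document.body.appendChild(clone);
}
```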

Therefore, prefer querySelectorXX for fetching DOM elements. If you must use getElementXX, make a shallow copy first, e.g. const boxes = [...document.getElementsByClassName('box')], to escape its live behavior.

2. Reduce unnecessary DOM operations

We should all know that in the browser, the JavaScript engine and the DOM are implemented independently of each other. In Internet Explorer, for example, the JavaScript implementation (JScript) lives in jscript.dll, while the DOM implementation lives in a separate library, mshtml.dll.

Since the two are completely independent, you can picture communication between them as crossing a "viaduct", where every crossing collects a "toll". That sounds expensive, doesn't it? So we should cross this "viaduct" as little as possible.

Try caching DOM references:

Compare two approaches: querying the element afresh for each property read crosses the "viaduct" every time (the more attributes we want, the more crossings we need), while caching the DOM reference lets us read any number of attribute values after crossing the "viaduct" only once.
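A sketch of the two approaches (the selector and properties are illustrative):

```javascript
// Approach 1: query the DOM for every read — two lookups here,
// and one more for every extra property we need
function readUncached() {
  const width = document.querySelector('.box').offsetWidth;
  const height = document.querySelector('.box').offsetHeight;
  return { width, height };
}

// Approach 2: cache the element once, then read as many properties as we like
function readCached() {
  const box = document.querySelector('.box'); // single DOM lookup
  return { width: box.offsetWidth, height: box.offsetHeight };
}
```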

Batch DOM updates as much as possible:

Comparing the execution time of two functions, innerHTMLLoop and innerHTMLLoop2, shows that it is far more efficient to build up the content that needs updating first, and then write it to the page in a single update.
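The two functions can be sketched like this (the #list element id is an assumption):

```javascript
// Touches the DOM on every one of the 1000 iterations
function innerHTMLLoop() {
  for (let i = 0; i < 1000; i++) {
    document.getElementById('list').innerHTML += 'a';
  }
}

// Builds the content in a local string first, then updates the DOM once
function innerHTMLLoop2() {
  let content = '';
  for (let i = 0; i < 1000; i++) {
    content += 'a';
  }
  document.getElementById('list').innerHTML += content;
}
```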

There is another scenario: suppose we want to insert 1,000 <li> nodes into ul.container in one go. The conventional approach would look like this:
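A sketch of the conventional approach (the list-item text is illustrative):

```javascript
const container = document.querySelector('ul.container');

for (let i = 0; i < 1000; i++) {
  const li = document.createElement('li');
  li.textContent = `item ${i}`;
  container.appendChild(li); // touches the live DOM on every iteration
}
```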

Done this way, we append to the container 1,000 times, which means crossing the "viaduct" 1,000 times. How can we collapse those 1,000 updates into a single operation?

There are actually a couple of options. One is:

We can create a virtual node with document.createDocumentFragment(), apply all 1,000 updates to that fragment first, and then append it to the container in a single operation.
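A sketch of the DocumentFragment approach:

```javascript
const container = document.querySelector('ul.container');
const fragment = document.createDocumentFragment(); // lives in memory, not in the page

for (let i = 0; i < 1000; i++) {
  const li = document.createElement('li');
  li.textContent = `item ${i}`;
  fragment.appendChild(li); // no reflow: the fragment is off-document
}

container.appendChild(fragment); // one DOM update; the fragment's children move in
```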

Another option is:

Set .container { display: none; } before appending the child elements with container.appendChild, and finally show them again with display: block. While an element has display: none, changes to it do not trigger page reflow or repaint, which reduces the cost.
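A sketch of the display: none approach:

```javascript
const container = document.querySelector('ul.container');

container.style.display = 'none'; // hidden elements do not trigger reflow/repaint
for (let i = 0; i < 1000; i++) {
  const li = document.createElement('li');
  li.textContent = `item ${i}`;
  container.appendChild(li);
}
container.style.display = 'block'; // show everything again in one reflow
```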

We can choose whichever method fits our own scenario. Just remember to reduce frequent DOM operations.

3. Reduce the number of page reflows and repaints

Above we talked about updating and inserting DOM elements. In fact, whenever an element on the page changes, the page may reflow or repaint.

Of course, the browser is not naive. Reflow is expensive, so it does not reflow and repaint immediately on every change; instead, it batches pending changes in a queue and flushes them periodically. However, reading certain properties forces that queue to flush so an up-to-date value can be returned, triggering a synchronous reflow and repaint:

  • offsetTop/Left/Width/Height
  • scrollTop/Left/Width/Height
  • clientTop/Left/Width/Height

Therefore, we should use this type of property as sparingly as possible, and when we do need the value, cache it instead of reading it repeatedly.
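For example, reading offsetWidth inside a write loop forces a layout flush on every iteration, while reading once and caching avoids that (the +10 resize is illustrative, and elements is assumed to be an array of DOM elements):

```javascript
// Bad: each offsetWidth read after a style write forces a synchronous reflow
function growAll(elements) {
  for (const el of elements) {
    el.style.width = el.offsetWidth + 10 + 'px';
  }
}

// Better: do all the reads first, cache the values, then do all the writes
function growAllCached(elements) {
  const widths = elements.map(el => el.offsetWidth); // read phase
  elements.forEach((el, i) => {
    el.style.width = widths[i] + 10 + 'px';          // write phase
  });
}
```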

That’s the end of today’s content, I hope it’s helpful to everyone.

Origin blog.csdn.net/weixin_46422035/article/details/121697060