With terminal hardware so powerful today, why does the front end still need performance optimization?

Introduction

In fact, I have wanted to write this article for a long time. It will not cover any specific performance-optimization techniques; instead, it simply discusses what performance is, and whether optimization is still needed now that terminal hardware is so powerful.

With today's powerful terminal hardware, such as ten- or twenty-core CPUs, high-speed memory, and solid-state drives, do we still need performance optimization? I often hear front-end programmers say that once the code is written, you just let the device run it, and if it stutters, that is the device's problem. Or: who still cares about performance optimization? How noticeable can a few dozen milliseconds really be?

Today we will discuss these questions objectively and see whether optimization is actually necessary.

What is performance?

Simply put, performance can be understood as the combined result of indicators across multiple dimensions.

The first is CPU performance. Broken down further, we generally look at clock speed, core count, thread count, L3 cache capacity, and scheduling efficiency.

Then there is memory. Its key indicators are essentially bandwidth and throughput: how much data can be transferred per unit of time. Memory actually has many more indicators, but for the sake of space I will not introduce them here.

The GPU, generally a graphics card, can also be called a coprocessor. It mainly performs simple calculations and assists the CPU with graphics rendering. Its biggest advantage is that it processes data in parallel, which is very different from the CPU's serial processing model, so graphics information can be processed more efficiently, improving frame rate and display quality.

For users, the most intuitive manifestation of performance in everyday interaction is whether there are obvious frame drops or input delays. If stuttering is severe, or the first screen takes more than three seconds to load, you can lose up to 90% of your users, which is fatal.
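Whether a piece of work is "obviously slow" can be measured rather than guessed. Here is a minimal sketch, assuming a modern browser or recent Node.js where `performance.now()` is available as a global, that times a block of synchronous work (the loop here is just a stand-in workload):

```javascript
// Time a block of synchronous work with a high-resolution clock.
// performance.now() is global in browsers and in modern Node.js.
const t0 = performance.now();

// Stand-in workload: sum the first million integers.
let acc = 0;
for (let i = 0; i < 1e6; i++) acc += i;

const elapsed = performance.now() - t0;
console.log(`work took ${elapsed.toFixed(2)}ms`);
```

If `elapsed` regularly exceeds a frame's budget on a 60Hz display (about 16.7ms), the user will perceive the result as jank.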

Does a high-performance terminal still need optimization?

OK, let's get to today's topic. To answer this question, let's start with an example:

Suppose we have 128KB of memory and need to read all of it, and assume the CPU takes 0.1ms to process each 1KB. In pseudocode:

const Memory = new Array(128);
for (let i = 0; i < Memory.length; i++) {
    /** 0.1ms per KB, so 12.8ms in total **/
}

This kind of linear loop is as ordinary as it gets: only 12.8ms. In big-O notation it is O(n), which looks harmless so far. Now let's change the code:

const Memory = new Array(128);
for (let i = 0; i < Memory.length; i++) {
    for (let j = 0; j < Memory.length; j++) {
        /** 128 × 12.8ms, so 1638.4ms in total **/
    }
}

This time the double loop (think of bubble sort) makes the cost noticeable. In big-O notation it is O(n^2), 128 times the original, reaching 1638.4ms. On a screen refreshing at 60Hz, each frame gets only 16.7ms, so without asynchronous processing this would already cause visible jank (in reality the time would not be this long; these are assumed conditions). And a triple loop? That reaches O(n^3), or 209,715.2ms.
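When a long job really must run on the main thread, one common mitigation is to split it into slices and yield back to the event loop between slices, so no single slice blows the frame budget. A minimal sketch (the helper name `processInChunks` and the chunk size are my own choices for illustration):

```javascript
// Process a large array in slices, yielding to the event loop between
// slices via setTimeout(0) so other work (rendering, input) can run.
function processInChunks(items, handle, chunkSize = 1000) {
  return new Promise((resolve) => {
    let i = 0;
    function slice() {
      const end = Math.min(i + chunkSize, items.length);
      for (; i < end; i++) handle(items[i]);
      if (i < items.length) setTimeout(slice, 0); // yield, then continue
      else resolve();
    }
    slice();
  });
}

// Usage: sum 10,000 numbers in slices of 1,000.
const data = Array.from({ length: 10000 }, (_, k) => k);
let sum = 0;
processInChunks(data, (n) => { sum += n; }).then(() => {
  console.log(sum); // → 49995000
});
```

In a browser you might prefer `requestAnimationFrame` or `requestIdleCallback` for the yield point; `setTimeout(0)` is used here because it also works in Node.js.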

From this result you can imagine what happens if we process the 8GB of memory in a typical computer this way without any optimization; it hardly bears thinking about. Even if processing each 1KB took only 0.01ms, there would be no fundamental improvement. CPU performance has grown roughly linearly over the years, but the cost of a bad piece of code grows polynomially or worse with input size, far outpacing the progress of our hardware.
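The arithmetic in the example above can be condensed into a small sketch. The variable names are my own; the figures follow the article's assumption of 0.1ms per KB over 128KB:

```javascript
// Reproduce the article's arithmetic for O(n), O(n^2), O(n^3)
// over n = 128 KB at an assumed cost of 0.1ms per KB.
const n = 128;        // KB of memory
const perKbMs = 0.1;  // assumed processing cost per KB

const linear    = n * perKbMs;        // O(n):   12.8ms
const quadratic = n * n * perKbMs;    // O(n^2): 1638.4ms
const cubic     = n ** 3 * perKbMs;   // O(n^3): 209715.2ms

console.log(linear, quadratic, cubic);
```

Doubling the hardware's speed halves every figure, but going from O(n) to O(n^2) multiplies the cost by n; that is why better algorithms beat faster machines.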

Summary

The point today is actually very simple: performance optimization is necessary at any time. Look at the new APIs the front end keeps shipping, or at the iteration of any open-source framework or application; none of them ever stop optimizing performance. Optimizing performance also sharpens your own skills. I hope developers on both the front end and the back end take it seriously: writing a bad piece of code may take an hour, while writing an excellent one may take ten or twenty times as long to polish.


Origin juejin.im/post/7083841453893353485