How the frontend can display data in timed batches when the backend returns too much data at once

Background: While working on a large-screen dashboard recently, I had a chart whose backend returned a huge amount of data in a single response (for example: 100,000 records, or more).


Why handle it on the frontend: because this is a dashboard chart, pagination is not an option; the backend has to return everything in one response

Implementation:

Idea: split the data returned by the backend into chunks, then stitch the chunks back together batch by batch

  • Let's first write a function that splits the 100,000 records into chunks
  • A "chunk" here simply means slicing a fixed length of data at a time
  • For example, slicing 10 records at a time: the first slice takes indexes 0~9, the second takes 10~19, and so on, always a fixed length
  • Say the original data is: [1,2,3,4,5,6,7]
  • If we chunk it into groups of 3, the result is a two-dimensional array
  • That is: [ [1,2,3], [4,5,6], [7] ] (see the quick sketch right after this list)
  • Then traverse this two-dimensional array to get each item, i.e. the data of each chunk
  • Finally, use a timer to assign and render one chunk at a time
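
To make the slicing concrete, here is a minimal sketch of the [1,2,3,4,5,6,7] example with a chunk size of 3 (the size of 3 is just for illustration; the function in step 1 below uses 10):

const data = [1, 2, 3, 4, 5, 6, 7];
console.log(data.slice(0, 3)); // first slice:  [1, 2, 3]
console.log(data.slice(3, 6)); // second slice: [4, 5, 6]
console.log(data.slice(6, 9)); // third slice:  [7] (slice simply stops at the end of the array)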

1. The function for grouping/batching/chunking is as follows (each chunk holds 10 records):

function averageFn(arr) {
  let i = 0; // 1. Start slicing from index 0
  let result = []; // 2. Define the result, a two-dimensional array
  while (i < arr.length) { // 6. Once the index reaches the total length, slicing is done
    // 3. Walk through the original array from its first item
    result.push(arr.slice(i, i + 10)); // 4. Slice 10 of the 100,000 records at a time to form one chunk
    i = i + 10; // 5. After these 10 records, slice the next 10, and so on
  }
  return result; // 7. Finally, return the result
}
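
A quick usage sketch (the 100,000-record array is mocked here purely for illustration; in the real page the data comes from the backend):

const bigData = Array.from({ length: 100000 }, (_, i) => i + 1); // mock data
const twoDArr = averageFn(bigData);
console.log(twoDArr.length); // 10000 chunks in total
console.log(twoDArr[0]); // [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] -- the first chunk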

2. Create timers and assign values to render in sequence, as follows:

async plan() {
  this.loading = true;
  const res = await axios.get("http://ashuai.work:10000/bigData");
  this.loading = false;
  let twoDArr = averageFn(res.data.data);
  for (let i = 0; i < twoDArr.length; i++) {
    // Equivalent to creating many timer tasks within a very short time
    setTimeout(() => {
      this.arr = [...this.arr, ...twoDArr[i]]; // Assign to trigger a render
    }, 1000 * i); // or 17 * i -- mind the interval you set... 17 ≈ 1000 / 60, i.e. one frame at 60 FPS
  }
},
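
A quick back-of-the-envelope check on the interval: 100,000 records in chunks of 10 means 10,000 timer tasks. Spaced 1000 ms apart, the last chunk would only render after 10,000 s (almost 3 hours); even at 17 ms (roughly one frame at 60 FPS) it still takes about 170 s. This is why the second suggestion below (bigger chunks, hence fewer timers) matters just as much as the interval itself.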

This approach essentially creates a large number of timer tasks within a short period of time, and too many timer tasks consume resources.

If you adopt this approach, here are some suggestions:

  • Lengthen the timer interval to avoid the lag, or even a frozen page, caused by firing many timers within a short period
  • Slice more records per chunk, so that the data is split into as few chunks as possible (both adjustments are combined in the sketch below)
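
Putting both suggestions together, here is a minimal sketch; averageFnBySize, CHUNK_SIZE, and INTERVAL are names I made up for illustration, and the concrete values are assumptions rather than recommendations from the original post:

const CHUNK_SIZE = 1000; // slice 1000 records at a time -> only 100 chunks
const INTERVAL = 100; // 100 ms between chunks -> the last chunk lands after ~10 s

// Same idea as averageFn, but with a configurable chunk size
function averageFnBySize(arr, size) {
  const result = [];
  for (let i = 0; i < arr.length; i += size) {
    result.push(arr.slice(i, i + size));
  }
  return result;
}

// Inside the Vue component, plan() would then become:
async plan() {
  this.loading = true;
  const res = await axios.get("http://ashuai.work:10000/bigData");
  this.loading = false;
  const twoDArr = averageFnBySize(res.data.data, CHUNK_SIZE);
  for (let i = 0; i < twoDArr.length; i++) {
    setTimeout(() => {
      this.arr = [...this.arr, ...twoDArr[i]];
    }, INTERVAL * i);
  }
},

With 100 chunks of 1000 records each, the whole data set is on screen within about 10 seconds while the page stays responsive.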

In fact, this approach borrows the idea of paginating a large data set.

The above has been tested and works!!! Thanks for your support!!!

Disclaimer: This article draws on the following author's work

Author: Shui Rong Shui Fu
Link: https://juejin.cn/post/7205101745936416829
Source: Juejin (稀土掘金)
The copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please credit the source.
