Interviewer: Can You Implement Large File Upload and Resumable Upload?

Preface
Interviewers are quite busy these days and keep showing up in blog post titles. I don't particularly want to ride that trend, but I couldn't think of a better title :)

I was actually asked this question in an interview, as a whiteboard coding problem. My idea was on the right track, but in the end my answer wasn't entirely correct.

After spending some time organizing my thoughts: how should we implement large file upload, and how do we implement resumable upload on top of it?

This article builds a front end and a server from scratch and implements a demo of large file upload with resumable upload.

If anything in this article is wrong, please point it out and I'll correct it right away. If you have a better implementation, feel free to leave a comment.

Large file upload
Overall idea
Front end
Most articles online already give the front-end solution for uploading large files. The core is the Blob.prototype.slice method which, much like Array.prototype.slice, returns a slice of the original file.

With it we can split the file into chunks according to a preset maximum chunk count, then take advantage of HTTP concurrency to upload multiple chunks at once. Instead of transferring one huge file, we transfer many small chunks simultaneously, which can greatly reduce upload time.

Also, because the uploads are concurrent, the order in which chunks arrive at the server may change, so we need to record the order of each chunk.
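As a minimal sketch of the idea (names here are illustrative; the full implementation appears further below):

```js
// Minimal sketch: split a File (which is a Blob) into ordered chunks.
// CHUNK_COUNT is an illustrative constant; the real code below uses LENGTH = 10.
const CHUNK_COUNT = 10;

function sliceFile(file) {
  const chunkSize = Math.ceil(file.size / CHUNK_COUNT);
  const chunks = [];
  for (let cur = 0, index = 0; cur < file.size; cur += chunkSize, index++) {
    // slice() returns a Blob covering [cur, cur + chunkSize); index records the order
    chunks.push({ index, chunk: file.slice(cur, cur + chunkSize) });
  }
  return chunks;
}
```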

Server
The server is responsible for accepting the chunks and merging them once all chunks have been received.

This raises two follow-up questions:

When should the chunks be merged, i.e. when is the transfer complete?
How should the chunks be merged?
The first question needs cooperation from the front end: each chunk can carry the total chunk count, so the server can merge automatically once it has received that many chunks; alternatively, the front end can send an extra request to actively notify the server to merge.

For the second question, how exactly do we merge the chunks? We can use Node.js's fs.appendFileSync API, which synchronously appends data to a given file. In other words, once the server has received all chunks, it first creates an empty final file and then appends each chunk to it in order, as sketched below.
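Here is a minimal sketch of that merge step (paths and names are illustrative; the full server code later in the article uses fs-extra instead of the bare fs module):

```js
const fs = require("fs");
const path = require("path");

// Minimal sketch: append each chunk into an empty target file, in index order.
function mergeChunks(chunkDir, targetPath) {
  fs.writeFileSync(targetPath, ""); // create the empty final file
  fs.readdirSync(chunkDir)
    // chunk names end in "-<index>"; readdir order is not guaranteed, so sort
    .sort((a, b) => a.slice(a.lastIndexOf("-") + 1) - b.slice(b.lastIndexOf("-") + 1))
    .forEach(name =>
      fs.appendFileSync(targetPath, fs.readFileSync(path.join(chunkDir, name)))
    );
}
```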

Talk is cheap, show me the code. Let's turn the ideas above into code.

Front-end part
The front end uses Vue as the development framework. There isn't much demand on the UI, so plain HTML would work too; element-ui is used here for visual polish.

Upload control
First, create a file-selection control that listens for the change event, plus an upload button:
```vue
<template>
  <div>
    <input type="file" @change="handleFileChange" />
    <el-button @click="handleUpload">上传</el-button>
  </div>
</template>

<script>
export default {
  data: () => ({
    container: {
      file: null
    }
  }),
  methods: {
    async handleFileChange(e) {
      const [file] = e.target.files;
      if (!file) return;
      Object.assign(this.$data, this.$options.data());
      this.container.file = file;
    },
    async handleUpload() {}
  }
};
</script>
```


Request logic
For generality, no third-party request library is used; instead we write a simple wrapper around the native XMLHttpRequest called request:

```js
request({
  url,
  method = "post",
  data,
  headers = {},
  requestList
}) {
  return new Promise(resolve => {
    const xhr = new XMLHttpRequest();
    xhr.open(method, url);
    Object.keys(headers).forEach(key =>
      xhr.setRequestHeader(key, headers[key])
    );
    xhr.send(data);
    xhr.onload = e => {
      resolve({
        data: e.target.response
      });
    };
  });
}
```

Uploading chunks
Next comes the more important upload functionality. Uploading needs to do two things:

Slice the file into chunks
Transmit the chunks to the server

```vue
<template>
  <div>
    <input type="file" @change="handleFileChange" />
    <el-button @click="handleUpload">上传</el-button>
  </div>
</template>

<script>
+ const LENGTH = 10; // number of chunks

export default {
  data: () => ({
    container: {
      file: null
    },
+   data: []
  }),
  methods: {
    request() {},
    async handleFileChange() {},
+   // generate file chunks
+   createFileChunk(file, length = LENGTH) {
+     const fileChunkList = [];
+     const chunkSize = Math.ceil(file.size / length);
+     let cur = 0;
+     while (cur < file.size) {
+       fileChunkList.push({ file: file.slice(cur, cur + chunkSize) });
+       cur += chunkSize;
+     }
+     return fileChunkList;
+   },
+   // upload chunks
+   async uploadChunks() {
+     const requestList = this.data
+       .map(({ chunk, hash }) => {
+         const formData = new FormData();
+         formData.append("chunk", chunk);
+         formData.append("hash", hash);
+         formData.append("filename", this.container.file.name);
+         return { formData };
+       })
+       .map(async ({ formData }) =>
+         this.request({
+           url: "http://localhost:3000",
+           data: formData
+         })
+       );
+     await Promise.all(requestList); // upload chunks concurrently
+   },
+   async handleUpload() {
+     if (!this.container.file) return;
+     const fileChunkList = this.createFileChunk(this.container.file);
+     this.data = fileChunkList.map(({ file }, index) => ({
+       chunk: file,
+       hash: this.container.file.name + "-" + index // file name + array index
+     }));
+     await this.uploadChunks();
+   }
  }
};
</script>
```

When the upload button is clicked, createFileChunk slices the file. The chunk count is controlled by the constant LENGTH, set here to 10, i.e. the file is split into 10 chunks for upload.

Inside createFileChunk, a while loop calls slice and pushes each chunk into the fileChunkList array, which it returns.

When generating the chunks, each chunk needs a hash as its identifier. For now we use file name + index, so the back end knows which chunk of the file it has received, which is needed for the later merge.

handleUpload then calls uploadChunks to upload all the chunks: it puts each chunk, its hash, and the file name into a FormData object, calls the request function from the previous step (each call returns a promise), and finally uploads all the chunks concurrently with Promise.all.

Sending the merge request
Here we use the second merge strategy mentioned above, i.e. the front end actively notifies the server to merge. The front end sends an extra request, and the server merges the chunks when it receives it.

```vue
<template>
  <div>
    <input type="file" @change="handleFileChange" />
    <el-button @click="handleUpload">上传</el-button>
  </div>
</template>

<script>
export default {
  data: () => ({
    container: {
      file: null
    },
    data: []
  }),
  methods: {
    request() {},
    async handleFileChange() {},
    createFileChunk() {},
    // upload chunks
    async uploadChunks() {
      const requestList = this.data
        .map(({ chunk, hash }) => {
          const formData = new FormData();
          formData.append("chunk", chunk);
          formData.append("hash", hash);
          formData.append("filename", this.container.file.name);
          return { formData };
        })
        .map(async ({ formData }) =>
          this.request({
            url: "http://localhost:3000",
            data: formData
          })
        );
      await Promise.all(requestList);
+     // merge chunks
+     await this.mergeRequest();
    },
+   async mergeRequest() {
+     await this.request({
+       url: "http://localhost:3000/merge",
+       headers: {
+         "content-type": "application/json"
+       },
+       data: JSON.stringify({
+         filename: this.container.file.name
+       })
+     });
+   },
    async handleUpload() {}
  }
};
</script>
```

Server part
Build a simple server with the http module:

```js
const http = require("http");
const server = http.createServer();

server.on("request", async (req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Access-Control-Allow-Headers", "*");
  if (req.method === "OPTIONS") {
    res.statusCode = 200;
    res.end();
    return;
  }
});

server.listen(3000, () => console.log("正在监听 3000 端口"));
```

Receiving chunks
Use the multiparty package to process the FormData sent by the front end.

In the multiparty.parse callback, the files parameter holds the files from the FormData and the fields parameter holds the non-file fields.

```js
const http = require("http");
const path = require("path");
const fse = require("fs-extra");
const multiparty = require("multiparty");

const server = http.createServer();
+ const UPLOAD_DIR = path.resolve(__dirname, "..", "target"); // large file storage directory

server.on("request", async (req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Access-Control-Allow-Headers", "*");
  if (req.method === "OPTIONS") {
    res.statusCode = 200;
    res.end();
    return;
  }
+ const multipart = new multiparty.Form();
+ multipart.parse(req, async (err, fields, files) => {
+   if (err) {
+     return;
+   }
+   const [chunk] = files.chunk;
+   const [hash] = fields.hash;
+   const [filename] = fields.filename;
+   const chunkDir = `${UPLOAD_DIR}/${filename}`;
+   // create the chunk directory if it does not exist yet
+   if (!fse.existsSync(chunkDir)) {
+     await fse.mkdirs(chunkDir);
+   }
+   // move the temporary file, renaming it to the chunk hash
+   await fse.rename(chunk.path, `${chunkDir}/${hash}`);
+   res.end("received file chunk");
+ });
});

server.listen(3000, () => console.log("正在监听 3000 端口"));
```


Inspecting the chunk object after multiparty has processed it: path is the storage path of the temporary file and size is its size. The multiparty docs mention that fs.rename can be used to move and rename the temporary file, which in our case is the uploaded chunk.

When receiving chunks, we create a folder to store them. Since each chunk the front end sends carries a unique hash, we use the hash as the file name and move each chunk from its temporary path into the chunk folder.

Merging chunks
When the server receives the merge request from the front end, it merges all the chunks in the folder:

```js
const http = require("http");
const path = require("path");
const fse = require("fs-extra");

const server = http.createServer();
const UPLOAD_DIR = path.resolve(__dirname, "..", "target"); // large file storage directory

+ const resolvePost = req =>
+   new Promise(resolve => {
+     let chunk = "";
+     req.on("data", data => {
+       chunk += data;
+     });
+     req.on("end", () => {
+       resolve(JSON.parse(chunk));
+     });
+   });

+ // merge chunks
+ const mergeFileChunk = async (filePath, filename) => {
+   const chunkDir = `${UPLOAD_DIR}/${filename}`;
+   const chunkPaths = await fse.readdir(chunkDir);
+   // readdir order is not guaranteed; sort by the index suffix of the chunk name
+   chunkPaths.sort((a, b) => a.slice(a.lastIndexOf("-") + 1) - b.slice(b.lastIndexOf("-") + 1));
+   await fse.writeFile(filePath, "");
+   chunkPaths.forEach(chunkPath => {
+     fse.appendFileSync(filePath, fse.readFileSync(`${chunkDir}/${chunkPath}`));
+     fse.unlinkSync(`${chunkDir}/${chunkPath}`);
+   });
+   fse.rmdirSync(chunkDir); // remove the chunk directory after merging
+ };

server.on("request", async (req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Access-Control-Allow-Headers", "*");
  if (req.method === "OPTIONS") {
    res.statusCode = 200;
    res.end();
    return;
  }
+ if (req.url === "/merge") {
+   const data = await resolvePost(req);
+   const { filename } = data;
+   const filePath = `${UPLOAD_DIR}/${filename}`;
+   await mergeFileChunk(filePath, filename);
+   res.end(
+     JSON.stringify({
+       code: 0,
+       message: "file merged success"
+     })
+   );
+ }
});

server.listen(3000, () => console.log("正在监听 3000 端口"));
```

Since the front end sends the file name along with the merge request, the server can locate the chunk folder created in the previous step by file name.

Then fse.writeFile creates an empty file, whose name is the chunk folder name combined with the extension, and fse.appendFileSync appends each chunk from the chunk folder into that empty file. Each chunk is deleted after it is appended, and once every chunk has been merged, the chunk folder is deleted as well.

With that, a simple large file upload is done. Next we'll extend it with some extra features.

Showing upload progress bars
There are two kinds of upload progress: the progress of each chunk, and the progress of the whole file. Since the file's progress is computed from the chunks' progress, we implement chunk progress first.

Chunk progress bars
XMLHttpRequest supports upload progress monitoring natively; we only need to listen to upload.onprogress. Add an onProgress parameter to the existing request function and register the listener on the XMLHttpRequest:

```js
// xhr
request({
  url,
  method = "post",
  data,
  headers = {},
+ onProgress = e => e,
  requestList
}) {
  return new Promise(resolve => {
    const xhr = new XMLHttpRequest();
+   xhr.upload.onprogress = onProgress;
    xhr.open(method, url);
    Object.keys(headers).forEach(key =>
      xhr.setRequestHeader(key, headers[key])
    );
    xhr.send(data);
    xhr.onload = e => {
      resolve({
        data: e.target.response
      });
    };
  });
}
```

Since each chunk needs to trigger its own listener, we also need a factory function that returns a different listener for each chunk passed in.

Add the listener logic to the earlier front-end upload code:

```js
// upload chunks
async uploadChunks() {
  const requestList = this.data
    .map(({ chunk, hash, index }) => {
      const formData = new FormData();
      formData.append("chunk", chunk);
      formData.append("hash", hash);
      formData.append("filename", this.container.file.name);
      return { formData, index };
    })
    .map(async ({ formData, index }) =>
      this.request({
        url: "http://localhost:3000",
        data: formData,
+       onProgress: this.createProgressHandler(this.data[index]),
      })
    );
  await Promise.all(requestList);
  // merge chunks
  await this.mergeRequest();
},
async handleUpload() {
  if (!this.container.file) return;
  const fileChunkList = this.createFileChunk(this.container.file);
  this.data = fileChunkList.map(({ file }, index) => ({
    chunk: file,
+   index,
+   size: file.size, // chunk size, used by the file progress bar below
    hash: this.container.file.name + "-" + index,
+   percentage: 0
  }));
  await this.uploadChunks();
},

+ createProgressHandler(item) {
+   return e => {
+     item.percentage = parseInt(String((e.loaded / e.total) * 100));
+   };
+ }
```

During upload, each chunk's listener updates the percentage property of the corresponding element in the data array; rendering the data array in the view then displays per-chunk progress.
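The article doesn't show the view code; here is a minimal hedged sketch of what that rendering could look like with element-ui's el-progress component (the bindings are illustrative):

```html
<!-- Illustrative sketch: render one progress bar per chunk from the data array -->
<div v-for="item in data" :key="item.hash">
  <span>{{ item.hash }}</span>
  <el-progress :percentage="item.percentage" />
</div>
```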

File progress bar
Adding up the uploaded portion of every chunk and dividing by the file's total size gives the current upload progress of the whole file, so a Vue computed property is used here:

```js
computed: {
  uploadPercentage() {
    if (!this.container.file || !this.data.length) return 0;
    const loaded = this.data
      .map(item => item.size * item.percentage)
      .reduce((acc, cur) => acc + cur);
    return parseInt((loaded / this.container.file.size).toFixed(2));
  }
}
```

The final view shows a progress bar for each chunk along with the overall file progress.

Resumable upload
Resumable upload works by having the front end/server remember which chunks have already been uploaded, so the next upload can skip them. There are two ways to implement this memory (a sketch of the first follows this list):

The front end records the uploaded chunks' hashes in localStorage
The server stores the uploaded chunks' hashes, and the front end asks the server for them before each upload
The first is a front-end solution, the second a server-side one. The front-end scheme has a flaw: switch to a different browser and the memory is lost. So we pick the latter.
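For completeness, a minimal sketch of the rejected localStorage variant might look like this (the storage key and helper names are made up for this example):

```js
// Illustrative sketch of option 1: remember uploaded chunk hashes in localStorage.
const KEY = "uploaded-chunk-hashes"; // hypothetical storage key

function getUploadedHashes(fileHash) {
  const all = JSON.parse(localStorage.getItem(KEY) || "{}");
  return all[fileHash] || [];
}

function addUploadedHash(fileHash, chunkHash) {
  const all = JSON.parse(localStorage.getItem(KEY) || "{}");
  all[fileHash] = [...(all[fileHash] || []), chunkHash];
  localStorage.setItem(KEY, JSON.stringify(all));
}
```

As noted above, this memory disappears as soon as the user changes browsers or clears storage, which is why the server-side scheme is used instead.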

Generating the hash
Whether on the front end or the server, we must generate a hash for the file and its chunks. Previously we used file name + chunk index as the chunk hash, which breaks as soon as the file is renamed. In fact, as long as the file content doesn't change, the hash shouldn't change either, so the correct approach is to generate the hash from the file content. Let's change the hash generation rule accordingly.

Here we use another library, spark-md5, which computes a file's hash from its content. Note that for a very large file, reading the content and computing the hash is time-consuming and will block the UI, freezing the page, so we compute the hash in a worker thread via web worker; the user can then still interact normally with the main page.

When instantiating a web worker, the constructor argument is a JS file path that must not be cross-origin, so we create a separate hash.js file in the public directory. The worker also cannot access the DOM, but it provides the importScripts function for importing external scripts, which we use to import spark-md5.

```js
// /public/hash.js
self.importScripts("/spark-md5.min.js"); // import the script

// generate the file hash
self.onmessage = e => {
  const { fileChunkList } = e.data;
  const spark = new self.SparkMD5.ArrayBuffer();
  let percentage = 0;
  let count = 0;
  const loadNext = index => {
    const reader = new FileReader();
    reader.readAsArrayBuffer(fileChunkList[index].file);
    reader.onload = e => {
      count++;
      spark.append(e.target.result);
      if (count === fileChunkList.length) {
        self.postMessage({
          percentage: 100,
          hash: spark.end()
        });
        self.close();
      } else {
        percentage += 100 / fileChunkList.length;
        self.postMessage({
          percentage
        });
        // compute the next chunk recursively
        loadNext(count);
      }
    };
  };
  loadNext(0);
};
```

In the worker thread, we receive the file chunks fileChunkList, read each chunk's ArrayBuffer with FileReader, and keep feeding the results into spark-md5. After each chunk is processed, a progress event is sent to the main thread via postMessage, and once everything is done the final hash is sent to the main thread.

spark-md5 needs to be fed all the chunks to compute a single hash; you can't just drop the whole file into the computation at once, otherwise even different files may end up with the same hash. See the official docs for details:

spark-md5

Next, write the communication logic between the main thread and the worker thread:

```js
+ // generate the file hash (web-worker)
+ calculateHash(fileChunkList) {
+   return new Promise(resolve => {
+     // add a worker property to container
+     this.container.worker = new Worker("/hash.js");
+     this.container.worker.postMessage({ fileChunkList });
+     this.container.worker.onmessage = e => {
+       const { percentage, hash } = e.data;
+       this.hashPercentage = percentage; // hashPercentage needs to be declared in data
+       if (hash) {
+         resolve(hash);
+       }
+     };
+   });
  },
  async handleUpload() {
    if (!this.container.file) return;
    const fileChunkList = this.createFileChunk(this.container.file);
+   this.container.hash = await this.calculateHash(fileChunkList);
    this.data = fileChunkList.map(({ file }, index) => ({
+     fileHash: this.container.hash,
      chunk: file,
      hash: this.container.file.name + "-" + index, // file name + array index
      percentage: 0
    }));
    await this.uploadChunks();
  }
```

The main thread passes all chunks, fileChunkList, to the worker thread with postMessage and listens for the worker's message events to get the file hash.

With a progress bar for the hash calculation added (hashPercentage above), the page now also shows hashing progress.

At this point the front end needs to rewrite every place that previously used the file name as the hash to use the hash returned by the worker.

The server then uses the hash as the chunk folder name, hash + index as the chunk name, and hash + extension as the file name; no new logic is needed (a sketch of the renamed paths follows).
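The original doesn't show this change; as a hedged sketch, the upload handler's path logic would change roughly like this, assuming the front end now also sends the file hash in the FormData (it does so in the final code later in the article):

```js
// Sketch: chunk storage paths keyed by the content hash instead of the file name.
// Assumes fields.fileHash is sent by the front end alongside hash and filename.
const [fileHash] = fields.fileHash;
const chunkDir = `${UPLOAD_DIR}/${fileHash}`;        // chunk folder named by file hash
if (!fse.existsSync(chunkDir)) {
  await fse.mkdirs(chunkDir);
}
await fse.rename(chunk.path, `${chunkDir}/${hash}`); // chunk named "<fileHash>-<index>"
```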

Instant upload
Before implementing resumable upload, let's briefly introduce "instant upload".

So-called instant upload means the uploaded resource already exists on the server, so when the user uploads it again, they are simply told immediately that the upload succeeded.

Instant upload relies on the hash generated in the previous step: before uploading, the file hash is computed and sent to the server for verification. Since the hash is practically unique, once the server finds a file with the same hash, it can directly return an upload-success response.

```js
+ async verifyUpload(filename, fileHash) {
+   const { data } = await this.request({
+     url: "http://localhost:3000/verify",
+     headers: {
+       "content-type": "application/json"
+     },
+     data: JSON.stringify({
+       filename,
+       fileHash
+     })
+   });
+   return JSON.parse(data);
+ },
  async handleUpload() {
    if (!this.container.file) return;
    const fileChunkList = this.createFileChunk(this.container.file);
    this.container.hash = await this.calculateHash(fileChunkList);
+   const { shouldUpload } = await this.verifyUpload(
+     this.container.file.name,
+     this.container.hash
+   );
+   if (!shouldUpload) {
+     this.$message.success("秒传:上传成功");
+     return;
+   }
    this.data = fileChunkList.map(({ file }, index) => ({
      fileHash: this.container.hash,
      index,
      size: file.size,
      hash: this.container.hash + "-" + index,
      chunk: file,
      percentage: 0
    }));
    await this.uploadChunks();
  }
```

Instant upload is really just sleight of hand for the user's benefit; nothing is actually uploaded.

:)

The server-side logic is very simple: add a verification endpoint and check whether the file already exists:

```js
+ const extractExt = filename =>
+   filename.slice(filename.lastIndexOf("."), filename.length); // extract the extension

const UPLOAD_DIR = path.resolve(__dirname, "..", "target"); // large file storage directory

const resolvePost = req =>
  new Promise(resolve => {
    let chunk = "";
    req.on("data", data => {
      chunk += data;
    });
    req.on("end", () => {
      resolve(JSON.parse(chunk));
    });
  });

server.on("request", async (req, res) => {
  if (req.url === "/verify") {
+   const data = await resolvePost(req);
+   const { fileHash, filename } = data;
+   const ext = extractExt(filename);
+   const filePath = `${UPLOAD_DIR}/${fileHash}${ext}`;
+   if (fse.existsSync(filePath)) {
+     res.end(
+       JSON.stringify({
+         shouldUpload: false
+       })
+     );
+   } else {
+     res.end(
+       JSON.stringify({
+         shouldUpload: true
+       })
+     );
+   }
  }
});

server.listen(3000, () => console.log("正在监听 3000 端口"));
```

Pausing an upload
With hash generation and instant upload covered, back to resumable upload.

Resumable upload literally means breakpoint + resume, so the first step is to implement the "breakpoint", i.e. pausing the upload.

The principle is XMLHttpRequest's abort method, which can cancel an xhr request. For that we need to keep the xhr object of every chunk upload, so let's adapt the request method again:

```js
request({
  url,
  method = "post",
  data,
  headers = {},
  onProgress = e => e,
+ requestList
}) {
  return new Promise(resolve => {
    const xhr = new XMLHttpRequest();
    xhr.upload.onprogress = onProgress;
    xhr.open(method, url);
    Object.keys(headers).forEach(key =>
      xhr.setRequestHeader(key, headers[key])
    );
    xhr.send(data);
    xhr.onload = e => {
+     // remove the successfully completed xhr from the list
+     if (requestList) {
+       const xhrIndex = requestList.findIndex(item => item === xhr);
+       requestList.splice(xhrIndex, 1);
+     }
      resolve({
        data: e.target.response
      });
    };
+   // expose the current xhr to the caller
+   requestList?.push(xhr);
  });
},
```

Now, when uploading chunks, we pass a requestList array as a parameter, and the request method stores every xhr in it.

Whenever a chunk finishes uploading, its xhr is removed from requestList, so requestList only holds the xhrs of chunks that are currently uploading.

Then add a pause button; clicking it calls abort on every xhr saved in requestList, i.e. it cancels and clears all in-flight chunk uploads:

```js
handlePause() {
  this.requestList.forEach(xhr => xhr?.abort());
  this.requestList = [];
}
```


Click the pause button and you can see that the xhrs have all been cancelled.

Resuming an upload
As mentioned when introducing resumable upload, we use the second, server-side storage scheme to implement resuming.

Since the server creates a folder holding all uploaded chunks of a file, the front end can call an endpoint before each upload; the server responds with the names of the chunks already uploaded, and the front end skips those chunks. This produces the "resume" effect.

This endpoint can be merged with the earlier instant-upload verification endpoint. The front end sends a verification request before each upload, with two possible results:

The file already exists on the server, so no upload is needed
The file does not exist on the server (or only some chunks do), so the server tells the front end to upload and returns the names of the chunks already uploaded
So let's adapt the server verification endpoint from the instant-upload step:

```js
const extractExt = filename =>
  filename.slice(filename.lastIndexOf("."), filename.length); // extract the extension
const UPLOAD_DIR = path.resolve(__dirname, "..", "target"); // large file storage directory

const resolvePost = req =>
  new Promise(resolve => {
    let chunk = "";
    req.on("data", data => {
      chunk += data;
    });
    req.on("end", () => {
      resolve(JSON.parse(chunk));
    });
  });

+ // return the list of already-uploaded chunk names
+ const createUploadedList = async fileHash =>
+   fse.existsSync(`${UPLOAD_DIR}/${fileHash}`)
+     ? await fse.readdir(`${UPLOAD_DIR}/${fileHash}`)
+     : [];

server.on("request", async (req, res) => {
  if (req.url === "/verify") {
    const data = await resolvePost(req);
    const { fileHash, filename } = data;
    const ext = extractExt(filename);
    const filePath = `${UPLOAD_DIR}/${fileHash}${ext}`;
    if (fse.existsSync(filePath)) {
      res.end(
        JSON.stringify({
          shouldUpload: false
        })
      );
    } else {
      res.end(
        JSON.stringify({
          shouldUpload: true,
+         uploadedList: await createUploadedList(fileHash)
        })
      );
    }
  }
});

server.listen(3000, () => console.log("正在监听 3000 端口"));
```

Back on the front end, the verification endpoint is called in two places:

On clicking upload, to check whether the upload is needed at all and which chunks are already uploaded
On resuming after a pause, to get the already-uploaded chunks
Add a resume button and adapt the original chunk upload logic:

```vue
<template>
  <div id="app">
    <input type="file" @change="handleFileChange" />
    <el-button @click="handleUpload">上传</el-button>
    <el-button v-if="!isPaused" @click="handlePause">暂停</el-button>
+   <el-button v-else @click="handleResume">恢复</el-button>
    <!-- ... -->
  </div>
</template>

+ async handleResume() {
+   const { uploadedList } = await this.verifyUpload(
+     this.container.file.name,
+     this.container.hash
+   );
+   await this.uploadChunks(uploadedList);
  },
  async handleUpload() {
    if (!this.container.file) return;
    const fileChunkList = this.createFileChunk(this.container.file);
    this.container.hash = await this.calculateHash(fileChunkList);
+   const { shouldUpload, uploadedList } = await this.verifyUpload(
      this.container.file.name,
      this.container.hash
    );
    if (!shouldUpload) {
      this.$message.success("秒传:上传成功");
      return;
    }
    this.data = fileChunkList.map(({ file }, index) => ({
      fileHash: this.container.hash,
      index,
      size: file.size,
      hash: this.container.hash + "-" + index,
      chunk: file,
      percentage: 0
    }));
+   await this.uploadChunks(uploadedList);
  },
  // upload chunks, skipping those already uploaded
+ async uploadChunks(uploadedList = []) {
    const requestList = this.data
+     .filter(({ hash }) => !uploadedList.includes(hash))
      .map(({ chunk, hash, index }) => {
        const formData = new FormData();
        formData.append("chunk", chunk);
        formData.append("hash", hash);
        formData.append("filename", this.container.file.name);
        formData.append("fileHash", this.container.hash);
        return { formData, index };
      })
      .map(async ({ formData, index }) =>
        this.request({
          url: "http://localhost:3000",
          data: formData,
          onProgress: this.createProgressHandler(this.data[index]),
          requestList: this.requestList
        })
      );
    await Promise.all(requestList);
    // previously uploaded chunks + chunks uploaded this time = all chunks
    // => merge
+   if (uploadedList.length + requestList.length === this.data.length) {
      await this.mergeRequest();
+   }
  }
```


Here the chunk upload function gains an uploadedList parameter, the list of chunk names returned by the server; already-uploaded chunks are filtered out. And since part of the file may already be uploaded, the trigger condition for the merge request changes a little too.

With that, resumable upload is basically complete.

Progress bar improvements
Resumable upload works now, but the progress bar display rules still need adjusting, otherwise the bars drift when pausing the upload or when already-uploaded chunks come back from the server.

Chunk progress bars
Since clicking upload/resume calls the verification endpoint, which returns the already-uploaded chunks, the progress of those chunks must be set to 100%:

```js
async handleUpload() {
  if (!this.container.file) return;
  const fileChunkList = this.createFileChunk(this.container.file);
  this.container.hash = await this.calculateHash(fileChunkList);
  const { shouldUpload, uploadedList } = await this.verifyUpload(
    this.container.file.name,
    this.container.hash
  );
  if (!shouldUpload) {
    this.$message.success("秒传:上传成功");
    return;
  }
  this.data = fileChunkList.map(({ file }, index) => ({
    fileHash: this.container.hash,
    index,
    size: file.size,
    hash: this.container.hash + "-" + index,
    chunk: file,
    // uploadedList holds chunk names ("<fileHash>-<index>"), so compare the chunk hash
+   percentage: uploadedList.includes(this.container.hash + "-" + index) ? 100 : 0
  }));
  await this.uploadChunks(uploadedList);
},
```

uploadedList contains the already-uploaded chunks, so while mapping over all the chunks we just check whether the current chunk is in that list.

File progress bar
As mentioned, the file progress bar is a computed property derived from the upload progress of all chunks, and that causes a problem:

clicking pause cancels and clears the chunks' xhr requests, and if part of the file was already uploaded you'll see the file progress bar move backwards.

When resuming, the recreated xhrs reset each chunk's progress to zero, so the total progress bar goes backwards as well.

The solution is to create a "fake" progress bar: it is based on the file progress bar but only ever stops or increases, and this fake bar is what the user sees.

Here we use a Vue watcher:

```js
data: () => ({
+ fakeUploadPercentage: 0
}),
computed: {
  uploadPercentage() {
    if (!this.container.file || !this.data.length) return 0;
    const loaded = this.data
      .map(item => item.size * item.percentage)
      .reduce((acc, cur) => acc + cur);
    return parseInt((loaded / this.container.file.size).toFixed(2));
  }
},
watch: {
+ uploadPercentage(now) {
+   if (now > this.fakeUploadPercentage) {
+     this.fakeUploadPercentage = now;
+   }
+ }
},
```

When uploadPercentage, i.e. the real file progress, increases, fakeUploadPercentage increases too; once the real progress bar moves backwards, the fake one simply stops.

With that, a complete large file upload + resumable upload solution is done.

Summary
Large file upload:

The front end slices the file with Blob.prototype.slice, uploads multiple chunks concurrently, and finally sends a merge request to notify the server to merge the chunks
The server receives and stores the chunks, and on receiving the merge request merges them with fs.appendFileSync
The native XMLHttpRequest's upload.onprogress monitors each chunk's upload progress
A Vue computed property derives the whole file's upload progress from each chunk's progress
Resumable upload:

Use spark-md5 to compute the file hash from the file content
The server can tell by the hash whether the file has already been uploaded and directly report success to the user (instant upload)
Pause chunk uploads via XMLHttpRequest's abort method
Before uploading, the server returns the names of the chunks already uploaded, and the front end skips uploading them


Origin: blog.51cto.com/14528283/2466427