Understanding large file upload in one article

One ready-made solution:
uppy.js on the front end,
tusd + minio on the back end (both are written in Go and ship as single binaries you can just drop on a server and run; a reverse proxy is needed in front so everything is served under the same domain).
For clustering you would have to implement the tus protocol yourself,
but that is unnecessary for an ordinary small project; running it on a single machine is enough.
This is probably the simplest way to get resumable (breakpoint-continued) uploads of very large files, and it saves the front end and back end from spending ages defining such a protocol themselves.
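
For reference, a minimal front-end sketch of that stack (the endpoint path, chunk size and the someFile variable are assumptions, not values from this article); uppy's tus plugin handles chunking, retries and resumable uploads against tusd:

import Uppy from '@uppy/core'
import Tus from '@uppy/tus'

// assumed: tusd is reverse-proxied under the same origin at /files/
const uppy = new Uppy({ autoProceed: true })
  .use(Tus, {
    endpoint: '/files/',           // assumed upload endpoint
    chunkSize: 5 * 1024 * 1024,    // 5MB chunks, tune to your network
    retryDelays: [0, 1000, 3000],  // automatic retry backoff
  })

// someFile is any File object obtained from an <input type="file">
uppy.addFile({ name: someFile.name, type: someFile.type, data: someFile })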

Details

Compared with ordinary small file uploads, large file uploads require attention to a number of details to keep the process stable and reliable. Some of the points that matter:

  1. Chunk size: choose the chunk size based on network bandwidth, server performance and similar factors. 1MB~10MB per chunk is generally recommended, so that oversized chunks do not cause uploads to fail (a minimal slicing sketch follows this list).
  2. Chunk serial number: when uploading in chunks, give each chunk a unique index so the chunks can be sorted and merged on the server.
  3. Chunk upload progress: to improve the user experience, show upload progress in real time; the front end can compute the ratio of the bytes already uploaded to the total file size.
  4. Breakpoint resume: resumable upload is the key to reliability. The back end must record which chunks have already been received so that, after a network failure or an interrupted upload, the transfer can continue from where it stopped.
  5. File verification: after the upload finishes, verify the file to make sure it is complete and correct.
  6. Concurrent upload limit: limit how many chunks are uploaded at the same time to avoid overloading the server or using too much bandwidth.
  7. Upload speed control: throttle the upload speed to avoid network congestion and errors during the upload.
  8. Merging chunks: after the upload completes, all chunks must be merged into the complete file, which requires care with chunk order and file paths.
  9. File storage: decide where and how uploaded files are stored for later access and management. Common options include local storage, object storage and distributed file systems; each has different characteristics and trade-offs, so choose according to business requirements.
  10. Upload file type restriction: restrict the allowed file types according to business needs to prevent illegal files from being uploaded.
  11. Upload timeout handling: when an upload takes too long or the network drops, handle the timeout so that bandwidth and server resources are not occupied indefinitely.
  12. Retry on upload failure: chunks may fail during the upload; failed chunks need to be retransmitted to keep the upload complete.
  13. Upload authorization: where authorization is required, authenticate and authorize the uploading user to keep uploads legal and secure.
  14. Renaming uploaded files: to guarantee uniqueness, the uploaded file may need to be renamed, for example by appending a timestamp or a random string.
  15. Backup and disaster recovery: to keep files safe and reliable, back up uploaded files and plan for disaster recovery, for example through multi-datacenter deployment, multiple replicas, or off-site recovery.
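
As a concrete illustration of points 1 and 2, here is a minimal sketch of slicing a File into fixed-size chunks that each carry a unique index (buildChunks and CHUNK_SIZE are illustrative names, not part of the code later in this article):

const CHUNK_SIZE = 5 * 1024 * 1024 // 5MB, within the suggested 1MB~10MB range

function buildChunks(file) {
  const chunks = []
  for (let index = 0; index * CHUNK_SIZE < file.size; index++) {
    const start = index * CHUNK_SIZE
    chunks.push({
      index, // unique serial number used later to sort and merge
      blob: file.slice(start, start + CHUNK_SIZE), // Blob.slice clamps the end to file.size
    })
  }
  return chunks
}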

Simple large file upload
Continuing from the previous part.

slice upload

simple slice

Rationale: a Blob is immutable (like a string) but it can be sliced, and File inherits all Blob methods.
The file is cut with file.slice(start, end), so you only need to keep track of the chunk size and the starting position of each cut.

const handleClickUpload = async () => {
  const _bigFile = fileList.value[0].file
  const _fileSize = _bigFile.size
  const _chunkSize = 1 * 1024 * 1024 // 1MB; note: file sizes are in bytes
  let _currentSize = 0

  while (_currentSize < _fileSize) {
    // slice the next chunk; use a fresh FormData per request so chunks do not accumulate
    const formData = new FormData()
    const _chunk = _bigFile.slice(_currentSize, _currentSize + _chunkSize)
    formData.append('avatar', _chunk, _bigFile.name)

    _currentSize += _chunkSize

    // upload this chunk
    await fetch('http://localhost:8888/upload', {
      method: 'post',
      body: formData
    })

    // progress: update only after this chunk has finished uploading, otherwise the
    // synchronous code runs ahead and the bar jumps straight to 100%
    progress.value = Math.min((_currentSize / _fileSize) * 100, 100)
    // upload(formData)
  }
}

MD5 check file consistency

SparkMD5.js
SparkMD5 is a JavaScript library for computing MD5 hashes of strings and binary data. It works in the browser and in Node.js, supports incremental (streaming) hashing, and is efficient and reliable. It is widely used to compute the MD5 of files and for data integrity verification.

// compute the MD5 hash of a string
const hash = SparkMD5.hash('hello world');
console.log(hash); // 5eb63bbbe01eeed093cb22bb8f5acdc3

// compute the MD5 hash of a file
const fileInput = document.querySelector('input[type="file"]');
const file = fileInput.files[0];
const fileReader = new FileReader();

fileReader.onload = function() {
  const spark = new SparkMD5.ArrayBuffer();
  spark.append(fileReader.result);
  const hash = spark.end();
  console.log(hash); // the file's MD5 hash
};

fileReader.readAsArrayBuffer(file);

Why compute an MD5 at all?
Because the "second transfer" (instant transfer) feature uses the MD5 to decide whether the file already exists on the server.
Before uploading, send the file's hash to the server. The server keeps the hashes of files it has already received, so a simple comparison tells it whether this file was uploaded before; if so, it tells the client not to upload again and the progress bar jumps straight to 100%.

WebWorker

Note that slicing a large file, say 10GB at 1MB per slice, produces 10,000 slices, and the hash of the whole file has to be computed as well. JavaScript runs on a single thread, so if this computation happens on the main thread the page will inevitably freeze. This is where the Web Worker comes in.
Inside the worker, third-party modules are loaded with importScripts. In worker.js we slice the file and post both the array of slices and the hash of the whole file back to the main thread.

There is a catch, though. What gets appended to spark must be an ArrayBuffer: each Blob produced by slicing has to be converted with fileReader.readAsArrayBuffer(blob), and the result is only available inside the fileReader.onload callback, which is asynchronous. If all the reads are fired off at once, the order in which spark.append(e.target.result) runs for the fragments may be scrambled, and the final hash of the whole file will be wrong.

Solution: process the slices recursively.
Only after the previous slice has been appended to spark do we cut and read the next one.

// worker.js
importScripts("./SparkMD5.js");

// receive the file object and the chunk size
self.onmessage = (e) => {
  const { file, DefualtChunkSize } = e.data;
  const blobSlice =
    File.prototype.slice ||
    File.prototype.mozSlice ||
    File.prototype.webkitSlice;
  const chunks = Math.ceil(file.size / DefualtChunkSize); // number of chunks
  let currentChunk = 0; // index of the current chunk
  const chunkList = []; // array of chunks
  const spark = new SparkMD5.ArrayBuffer();
  const fileReader = new FileReader();

  fileReader.onload = function (e) {
    spark.append(e.target.result);
    currentChunk++;

    if (currentChunk < chunks) {
      loadNext();
    } else {
      const fileHash = spark.end();
      console.info("finished computing hash", fileHash);
      // key point: once the hash is computed, notify the main thread via postMessage
      postMessage({ fileHash, chunkList });
      self.close(); // terminate the worker
    }
  };

  fileReader.onerror = function () {
    console.warn("oops, something went wrong.");
  };

  function loadNext() {
    const start = currentChunk * DefualtChunkSize;
    const end =
      start + DefualtChunkSize >= file.size
        ? file.size
        : start + DefualtChunkSize;
    const chunk = blobSlice.call(file, start, end);
    chunkList.push({ chunk, currentChunk });
    fileReader.readAsArrayBuffer(chunk);
  }

  loadNext();
};

// data structure posted back to the main thread
// { fileHash, chunkList: [{ chunk, currentChunk }, ...] }

upload shards

Each chunk is uploaded as a separate file rather than sending the whole array of slices in one request; this gives finer control and is also a prerequisite for concurrent uploads.

const uploadChunk = (fileChunk, fileHash, fileName) => {
  const formData = new FormData()
  formData.append('chunk', fileChunk.chunk); // the chunk itself
  formData.append('chunkIndex', fileChunk.currentChunk); // chunk index
  formData.append('fileName', fileName); // file name
  formData.append('fileHash', fileHash); // hash of the whole file

  return lcRequest.post({
    url: '/upload',
    data: formData
  })
}

To upload all of a file's chunks you only need to loop over the chunk array; the example below uploads them one by one in strict order.

merge request

After all chunks have been uploaded, a merge request is usually sent to tell the server that the upload is finished and the chunks can be merged.

// merge request
const mergeChunkRequest = (filename, fileHash) => {
  return lcRequest.post({
    url: '/api/merge',
    data: { filename, hash: fileHash }
  })
}

worker.postMessage({ file: file, DefualtChunkSize: chunkSize })
worker.onmessage = async e => {
  const { chunkList, fileHash } = e.data

  // upload the chunks one by one
  for (const chunk of chunkList) {
    const uploadChunkFinish = await uploadChunk(chunk, fileHash, file.name, file.size)
    console.log(uploadChunkFinish);
  }

  // merge request
  const res = await mergeChunkRequest(file.name, fileHash)
  console.log(res);
}

Breakpoint resume and second transfer

Second transfer

Before sending a file, first ask the server whether it has already been uploaded.

const verifyUpload = (filename, hash) => {
  return lcRequest.post({ // lcRequest is a wrapper around axios
    url: '/verify',
    data: { filename, hash },
    headers: { 'Content-Type': 'application/json' },
  })
}

The verify endpoint generally returns one of three responses:

  1. The file has already been uploaded: return true.
  2. The file has not been uploaded yet: return false.
  3. The file is partially uploaded: return how much has been uploaded (and which parts), plus the form field name of the file (a possible response shape is sketched below).
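
The exact response format is up to the back end. Assuming the uploadedSignal / uploadedList field names used by the front-end code later in this article, the three cases could look roughly like this:

// hypothetical /verify responses (field names are assumptions)
{ "uploadedSignal": true,  "uploadedList": [] }                                // 1. fully uploaded
{ "uploadedSignal": false, "uploadedList": [] }                                // 2. not uploaded yet
{ "uploadedSignal": false, "uploadedList": ["<hash>-0.mp4", "<hash>-3.mp4"] }  // 3. partially uploaded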


The third kind of response is what drives resumable upload. Returning only the uploaded size is actually not ideal: it only works for single-file uploads, or when the chunks are uploaded strictly in order.

const resumeUpload = (uploadedSize, fileSize, defualtChunkSize) => {
  const progress = 100 * uploadedSize / fileSize

  // index of the chunk to resume from (only meaningful when chunks were uploaded strictly in order)
  const index = Math.floor(uploadedSize / defualtChunkSize)
  return { index, progress }
}

If the front end uploads concurrently, a chunk near the beginning may fail while later chunks have already arrived. In that case the server needs to say exactly which chunks it already has.
The server generally returns the indexes of the uploaded chunks. If it cannot tell a fully uploaded chunk from a partially uploaded one, it returns the indexes of both; the client can then check the sizes of those chunks and simply discard the ones that are smaller than the expected chunk size.
With the complete chunk array in hand, filter out the chunks that are already uploaded; what remains is the array of chunks still to upload, which can then be uploaded one by one.

// uploadedList is [chunk1, chunk2, ..., chunkN]
// uploaded chunks are named <file hash>-<chunk index>, e.g. 13717432cb479f2f51abce2ecb318c13-1.mp3

const resumeUpload = (uploadedList, chunkList) => {
  // extract the chunk indexes from the names returned by the server
  const uploadedIndexList = uploadedList.map(item => {
    return Number(item.match(/-(\d+)\./)[1])
  })
  // keep only the chunks that have not been uploaded yet
  const resumeChunkList = chunkList.filter(item => {
    return !uploadedIndexList.includes(item.currentChunk)
  })

  return resumeChunkList
}

The main structure of the upload process

worker.postMessage({ file: file, DefualtChunkSize: chunkSize })
worker.onmessage = async e => {
  const { chunkList, fileHash } = e.data

  // verify for second (instant) transfer
  const { uploadedSignal, uploadedList } = await verifyUpload(file.name, fileHash)
  if (uploadedSignal === true) {
    // instant transfer: progress bar straight to 100%
    progress.value = 100
  } else if (uploadedSignal === false && uploadedList.length === 0) {
    // upload from scratch
  	...
  } else {
    // resume from breakpoint
    const resumeChunkList = resumeUpload(uploadedList, chunkList)
    ...
  }
}

upload progress

Single file upload progress

fetch cannot monitor upload progress, but XMLHttpRequest can, and axios provides hooks for both upload and download progress:

  • onUploadProgress: callback for upload progress
  • onDownloadProgress: callback for download progress

Both are browser-only options.

axios.post('/upload', formData, { onUploadProgress: uploadProgress })

const uploadProgress = (progressEvent) => {
  // compute the upload progress; loaded and total are built-in properties of the event
  const percentCompleted = Math.round((progressEvent.loaded * 100) / progressEvent.total);
  console.log(percentCompleted);
}

During the upload, the callback configured via onUploadProgress is invoked repeatedly as progress events fire.

Multipart upload progress

Because we upload the chunks one by one, the default axios progress callback only reports the progress of a single chunk; it cannot tell you which chunk the progress belongs to, let alone the overall progress. So a small calculation is needed, and it is simple: add up the bytes already uploaded for every chunk and divide by the total file size.

Since each chunk is effectively an individual file, the same approach also works as an overall progress calculation for multi-file uploads.

upload progress from 0

const progressArr = [] // records how much of each chunk has been uploaded
const uploadProgress = (progressEvent, chunkIndex, totalSize) => {
  if (progressEvent.total) {
    // store the uploaded size of the current chunk at its index
    progressArr[chunkIndex] = progressEvent.loaded * 100;
    // reduce: sum up the uploaded parts of all chunks
    const curTotal = progressArr.reduce(
      (accumulator, currentValue) => accumulator + currentValue,
      0,
    );
    // percentage progress
    progress.value = Math.min((curTotal / totalSize).toFixed(2), 100)
    // setProgress((curTotal / totalSize).toFixed(2)); // publish-subscribe variant
  }
};

The progress bar can use a publish-subscribe pattern to receive these progress updates, although that is optional; a possible shape is sketched below.
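
A minimal sketch of such a publish-subscribe helper (progressBus is a hypothetical name; the code in this article simply assigns progress.value directly):

const progressBus = {
  listeners: new Set(),
  subscribe(fn) { this.listeners.add(fn); return () => this.listeners.delete(fn) },
  publish(value) { this.listeners.forEach(fn => fn(value)) },
}

// the progress bar subscribes once...
progressBus.subscribe(percent => { progress.value = percent })
// ...and the upload callback publishes instead of touching the bar directly:
// progressBus.publish(Math.min(curTotal / totalSize, 100))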

Note: by default the axios upload progress callback receives only one parameter. To pass extra parameters, wrap the handler in another function.
The progress handler above takes several parameters, so it cannot be bound to onUploadProgress directly.

onUploadProgress: (progressEvent) => {
  return uploadProgress(progressEvent, formData.get('chunkIndex'), fileSize)
}

Because the progress calculation needs the total size of the file, an extra fileSize parameter is added to the chunk upload function.

const uploadChunk = (fileChunk, fileHash, fileName, fileSize) => {
  const formData = new FormData()

  formData.append('file', fileChunk.chunk); // the chunk
  formData.append('chunkIndex', fileChunk.currentChunk); // chunk index
  formData.append('filename', fileName); // file name
  formData.append('hash', fileHash); // hash of the whole file

  return lcRequest.post({
    url: '/api/upload',
    data: formData,
    // the default callback has a single parameter; wrap it to pass more
    onUploadProgress: (progressEvent) => uploadProgress(progressEvent, formData.get('chunkIndex'), fileSize)
  })
}

Resuming the progress of the interrupted upload

From the step above we have the index array of the chunks that are already uploaded. Since progress is computed by recording each chunk's uploaded size in progressArr, it is enough to pre-fill progressArr for those chunks first.
Modify resumeUpload so it also returns the array of uploaded indexes:

// breakpoint resume
const resumeUpload = (uploadedList, chunkList) => {
  const uploadedIndexList = uploadedList.map(item => {
    return Number(item.match(/-(\d+)\./)[1])
  })
  const resumeChunkList = chunkList.filter(item => {
    return !uploadedIndexList.includes(item.currentChunk)
  })
  return { uploadedIndexList, resumeChunkList }
}

First fill in the size of the uploaded fragments into progressArr, and then accumulate:

const { uploadedIndexList, resumeChunkList } = resumeUpload(uploadedList, chunkList)

// pre-fill progressArr (same loaded * 100 units as in uploadProgress)
uploadedIndexList.forEach(index => {
  progressArr[index] = chunkSize * 100
});

// upload the remaining chunks one by one, in series
for (const chunk of resumeChunkList) {
  const uploadChunkFinish = await uploadChunk(chunk, fileHash, file.name, file.size)
  console.log(uploadChunkFinish);
}

const res = await mergeChunkRequest(file.name, fileHash)
console.log(res);

cancel upload

An upload can stop because the page or the browser is closed, because the back end drops the connection, or because the front end cancels it through an API. The first two are unexpected situations; here we focus on actively cancelling from the front end.
To cancel the axios request,
just pass a signal option to axios in the chunk upload method.

// chunk upload
const controller = new AbortController();
const uploadChunk = (fileChunk, fileHash, fileName, fileSize) => {
	...

  return lcRequest.post({
    url: '/api/upload',
    data: formData,
    signal: controller.signal,
    onUploadProgress: (progressEvent) => uploadProgress(progressEvent, formData.get('chunkIndex'), fileSize)
  })
}

const handleClickAbortUpload = () => {
  controller.abort()
}

pause, resume request

Pausing and resuming boils down to cancelling the in-flight requests and then resuming from the breakpoint, as sketched below.
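
A minimal sketch of how pause and resume could be wired together from the functions defined above (handlePause / handleResume are hypothetical names; it assumes controller is declared with let so a fresh AbortController can replace the aborted one, and that file, fileHash, chunkList and progressArr are still in scope):

// pause: abort the in-flight chunk requests, then prepare a fresh controller
const handlePause = () => {
  controller.abort()
  controller = new AbortController() // assumes controller was declared with let
}

// resume: ask the server what it already has, then upload only the missing chunks
const handleResume = async () => {
  const { uploadedSignal, uploadedList } = await verifyUpload(file.name, fileHash)
  if (uploadedSignal === true) {
    progress.value = 100 // the whole file is already on the server
    return
  }
  const { uploadedIndexList, resumeChunkList } = resumeUpload(uploadedList, chunkList)
  uploadedIndexList.forEach(index => { progressArr[index] = chunkSize * 100 })
  await chunkListUpload(resumeChunkList, uploadChunk, fileHash, file, controlConcurrency, 2)
  await mergeChunkRequest(file.name, fileHash)
}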

concurrency control

The chunk uploads above are serial, which is not very efficient. We can upload chunks concurrently, but a browser only allows roughly 6 concurrent requests per origin (over HTTP/1.1), so the number of concurrent requests has to be controlled.
Concurrent request control:

async function controlConcurrency(requests, limit) {
  const results = []; // all request promises, in order
  const running = []; // promises currently in flight

  for (const request of requests) {
    const promise = request();

    results.push(promise);
    running.push(promise);

    if (running.length >= limit) {
      // wait until at least one in-flight request settles
      await Promise.race(running);
    }

    promise.finally(() => {
      // remove the settled promise from the in-flight list
      running.splice(running.indexOf(promise), 1);
    });
  }

  return Promise.all(results);
}

The chunk array is then uploaded concurrently:

// upload the chunk array concurrently
const chunkListUpload = (chunkList, uploadChunk, fileHash, file, concurrentControlFn, limit = 3) => {
  // build the array of request factories
  const requestsArr = []
  for (let index = 0; index < chunkList.length; index++) {
    requestsArr[index] = () => uploadChunk(chunkList[index], fileHash, file.name, file.size)
  }
  // concurrency control
  return concurrentControlFn(requestsArr, limit)
}

Complete upload control process:

worker.postMessage({ file: file, DefualtChunkSize: chunkSize })
worker.onmessage = async e => {
  const { chunkList, fileHash } = e.data

  // 1. verify for second (instant) transfer
  const { uploadedSignal, uploadedList } = await verifyUpload(file.name, fileHash)
  if (uploadedSignal === true) {
    // instant transfer: progress bar straight to 100%
    progress.value = 100
  } else if (uploadedSignal === false && uploadedList.length === 0) {
    // 2. upload from scratch
    const chunkListUploadRes = await chunkListUpload(chunkList, uploadChunk, fileHash, file, controlConcurrency, 2)
    console.log(chunkListUploadRes);
    // merge
    const res = await mergeChunkRequest(file.name, fileHash)
    console.log(res);
  } else {
    // 3. resume from breakpoint
    const { uploadedIndexList, resumeChunkList } = resumeUpload(uploadedList, chunkList)
    // pre-fill progressArr (same loaded * 100 units as in uploadProgress)
    uploadedIndexList.forEach(index => {
      progressArr[index] = chunkSize * 100
    });

    const chunkListUploadRes = await chunkListUpload(resumeChunkList, uploadChunk, fileHash, file, controlConcurrency, 2)
    console.log(chunkListUploadRes);

    const res = await mergeChunkRequest(file.name, fileHash)
    console.log(res);
  }
}

server

The server here only implements receiving and splicing the chunks; it does not implement breakpoint resume or second (instant) transfer. (A possible /verify sketch is added after the server code.)

Process:

  1. Chunk upload

Chunk upload is effectively multi-file upload, so multer's array method is used to receive multiple chunks. The chunks live in a temporary folder named after the file hash; using the hash guarantees the temp folder is unique and also lays the groundwork for breakpoint resume.

  2. Merge the chunks

Merging means sorting the chunks in the temporary folder by index, then reading them in order and appending them to a single output file.
The fs-extra module makes this straightforward: fse.appendFileSync(targetFile, chunkBuffer).

const Koa = require("koa");
const cors = require("koa-cors");
const Router = require("koa-router");
const bodyParser = require("koa-bodyparser");
const multer = require("@koa/multer");
const path = require('path')
const fse = require("fs-extra");

const app = new Koa();

const router = new Router({
  prefix: '/api',
})

const UPLOAD_DIR = './upload/'
// Ideally: create a temp folder named after the hash to hold the chunks, read and merge
// them into the original file, then delete the temp folder.
// The hash could not be read from req here, so for now a fixed temp folder is used,
// deleted after the merge and recreated.
// The downside is that a half-uploaded file cannot be kept this way.
const tempDir = './upload/temp'

// storage configuration for uploaded files
const storage = multer.diskStorage({
  destination: (req, file, cb) => {
    // console.log(req.body.hash); // the hash cannot be read from req here, so it cannot name the temp folder
    cb(null, tempDir);
  },
  filename: (req, file, cb) => {
    cb(null, `${file.originalname}_${Date.now()}${path.extname(file.originalname)}`);
  },
});

const upload = multer({ storage });

router.get("/test", (ctx) => {
  console.log("get request coming");
  ctx.response.body = JSON.stringify({ msg: "get success" });
});

// parse the formdata and receive the chunk upload
// the field passed to array() must match the form field sent by the front end
router.post("/upload", upload.array('file'), (ctx, dispatch) => {
  const { filename, hash } = ctx.request.body;
  const fileInfoList = ctx.request.files
  const uploadFile = []
  fileInfoList.forEach(item => {
    uploadFile.push(item.originalname)
  })
  ctx.response.body = JSON.stringify({ msg: "post success", uploadFile });
});

// merge the chunks
router.post("/merge", async (ctx) => {
  const { filename, hash } = ctx.request.body;

  // file extension
  const ext = filename.slice(filename.lastIndexOf('.') + 1)

  // read the chunks from the temp folder
  const chunks = await fse.readdir(tempDir);

  // sort the chunks, then read each into memory and append it to one file
  chunks
    .sort((a, b) => Number(a) - Number(b))
    .forEach((chunk) => {
      // merge into <hash>.<ext>
      fse.appendFileSync(
        path.join(UPLOAD_DIR, `${hash}.${ext}`),
        fse.readFileSync(path.join(tempDir, chunk))
      );
    });

  // delete and recreate the temp folder
  fse.removeSync(tempDir);
  fse.mkdirSync(tempDir)

  // could also return a download URL for the merged file
  ctx.body = "合并成功";
});


app.use(bodyParser());
app.use(cors());
app.use(router.routes());
app.use(router.allowedMethods());

app.listen(8888, "127.0.0.1", () => {
  console.log("server start...");
});
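
As noted above, this server does not implement the verify step. Below is a minimal sketch of a possible /verify route; the route path and the uploadedSignal / uploadedList field names are assumptions chosen to match the front-end verifyUpload call (which, note, posts to '/verify' without the '/api' prefix used by this router, so the paths would need to be aligned):

// hypothetical /verify route (not part of the original server)
router.post("/verify", async (ctx) => {
  const { filename, hash } = ctx.request.body;
  const ext = filename.slice(filename.lastIndexOf('.') + 1)

  // the merged file already exists: second (instant) transfer
  if (fse.pathExistsSync(path.join(UPLOAD_DIR, `${hash}.${ext}`))) {
    ctx.body = { uploadedSignal: true, uploadedList: [] };
    return;
  }

  // otherwise report whatever chunks are already sitting in the temp folder;
  // for breakpoint resume the chunk file names would have to follow the
  // <hash>-<index> convention the front end expects
  const uploadedList = fse.pathExistsSync(tempDir) ? await fse.readdir(tempDir) : [];
  ctx.body = { uploadedSignal: false, uploadedList };
});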

full code

<template>
  <div class="upload-box">
    <div class="head">
      <h3> 切片上传</h3>
      <input class="upload-btn" type="file" @change="handleFileChange">
    </div>
    <div class="preview-box">
      待上传文件:
      <div class="files-info">
        <template v-for="item in fileList" :key="item.lastModified">
          <div class="card">
            <div class="delete">
              <!-- <img :src="item.preUrl" alt="预览图"> -->
              <video :src="item.preUrl"></video>
              <div class="name">{
   
   { item.file.name }}</div>
            </div>
          </div>
        </template>
      </div>
    </div>
    <template v-if="progress != 0">
      <div class="progress">
        <div class="background">
          <div class="foreground" :style="{ width: progress + '%' }"></div>
        </div>
      </div>
    </template>
    <div class="result-box">
      上传反馈:
      <div class="result">

        <div>{{ result.msg }}</div>
        <div>{{ result.uploadFile }}</div>
      </div>
    </div>
    <div class="action">
      <button @click="handleClickUpload">上传</button>
      <button @click="handleClickAbortUpload">取消</button>
      <a href="" download="">xia zai</a>
    </div>
  </div>
</template>

<script setup>
import { ref } from 'vue'
import lcRequest from '../service/request';
import { controlConcurrency } from '../utils/concurrentControl';

const result = ref('')
const fileList = ref([])
const progress = ref('0')
const fileType = ['png', 'mp4', 'mkv']
const chunkSize = 10 * 1024 * 1024

// chunk upload
const controller = new AbortController();
const uploadChunk = (fileChunk, fileHash, fileName, fileSize) => {
  console.log(fileChunk);
  console.log(fileSize);
  const formData = new FormData()
  formData.append('file', fileChunk.chunk); // the chunk
  formData.append('chunkIndex', fileChunk.currentChunk); // chunk index
  formData.append('filename', fileName); // file name
  formData.append('hash', fileHash); // file hash

  return lcRequest.post({
    url: '/api/upload',
    data: formData,
    signal: controller.signal,
    // the default callback has a single parameter; wrap it to pass more
    onUploadProgress: (progressEvent) => uploadProgress(progressEvent, formData.get('chunkIndex'), fileSize)
  })
}

// progress calculation
const progressArr = []
const uploadProgress = (progressEvent, chunkIndex, totalSize) => {
  if (progressEvent.total) {
    // store the uploaded size of the current chunk at its index
    progressArr[chunkIndex] = progressEvent.loaded * 100;
    // reduce: sum up the uploaded parts of all chunks
    const curTotal = progressArr.reduce(
      (accumulator, currentValue) => accumulator + currentValue,
      0,
    );
    // percentage progress
    progress.value = Math.min((curTotal / totalSize).toFixed(2), 100)
  }
};

// merge request
const mergeChunkRequest = (filename, fileHash) => {
  return lcRequest.post({
    url: '/api/merge',
    data: { filename, hash: fileHash }
  })
}

// file type restriction
const fileTypeCheck = (file, typesArr) => {
  const index = file.name.lastIndexOf('.')
  const ext = file.name.slice(index + 1)

  if (!typesArr.includes(ext)) {
    alert(`${ext} 文件不允许上传!`)
    return false
  }
  return true
}

// get the file object
const handleFileChange = e => {
  const file = e.target.files[0]
  const isLegalType = fileTypeCheck(file, fileType)
  // add each selected file to the pending upload list (only if its type is allowed)
  isLegalType && fileList.value.push({ file, preUrl: URL.createObjectURL(file) })
}

// start the worker thread
const worker = new Worker('/src/utils/worker.js')

// second (instant) transfer verification
const verifyUpload = (filename, hash) => {
  // return { uploadedSignal: false, uploadedList: [] } // force an upload from scratch
  return lcRequest.post({
    url: '/verify',
    data: { filename, hash },
    headers: { 'Content-Type': 'application/json' },
  })
}
  
// breakpoint resume
const resumeUpload = (uploadedList, chunkList) => {
  const uploadedIndexList = uploadedList.map(item => {
    return Number(item.match(/-(\d+)\./)[1])
  })
  const resumeChunkList = chunkList.filter(item => {
    return !uploadedIndexList.includes(item.currentChunk)
  })
  return { uploadedIndexList, resumeChunkList }
}

// upload the chunk array concurrently
const chunkListUpload = (chunkList, uploadChunk, fileHash, file, concurrentControlFn, limit = 3) => {
  // build the array of request factories
  const requestsArr = []
  for (let index = 0; index < chunkList.length; index++) {
    requestsArr[index] = () => uploadChunk(chunkList[index], fileHash, file.name, file.size)
  }
  return concurrentControlFn(requestsArr, limit)
}

// click to upload
const handleClickUpload = async () => {
  // this demo uploads a single large file; for multiple files, loop over the file array
  const file = fileList.value[0].file
  console.log(file);

  worker.postMessage({ file: file, DefualtChunkSize: chunkSize })

  worker.onmessage = async e => {
    const { chunkList, fileHash } = e.data

    // 1. verify for second (instant) transfer
    const { uploadedSignal, uploadedList } = await verifyUpload(file.name, fileHash)
    if (uploadedSignal === true) {
      progress.value = 100 // instant transfer: progress bar straight to 100%
    } else if (uploadedSignal === false && uploadedList.length === 0) {
      // 2. upload from scratch
      const chunkListUploadRes = await chunkListUpload(chunkList, uploadChunk, fileHash, file, controlConcurrency, 2)
      console.log(chunkListUploadRes);
      const res = await mergeChunkRequest(file.name, fileHash)
      console.log(res);
    } else {
      // 3. resume from breakpoint
      const { uploadedIndexList, resumeChunkList } = resumeUpload(uploadedList, chunkList)
      // pre-fill progressArr first (same loaded * 100 units as in uploadProgress)
      uploadedIndexList.forEach(index => {
        progressArr[index] = chunkSize * 100
      });

      const chunkListUploadRes = await chunkListUpload(resumeChunkList, uploadChunk, fileHash, file, controlConcurrency, 2)
      console.log(chunkListUploadRes);
      const res = await mergeChunkRequest(file.name, fileHash)
      console.log(res);
    }
  }
}

// cancel the request
const handleClickAbortUpload = () => {
  controller.abort()
  alert("请求取消")
}

</script>
<style scoped>
.upload-box {
  width: 600px;
  padding: 0 10px;
  border: 1px solid;
  border-radius: 10px;
}

.head {
  width: 100%;
  height: 50px;
  display: flex;
  justify-content: space-between;
  align-items: center;
  border-bottom: 1px solid;
}

.preview-box {
  height: 500px;
}

.files-info {
  display: flex;
  justify-content: space-evenly;
  flex-flow: row wrap;
  overflow: auto;
}

.card .name {
  padding: 4px 0;
  text-align: center;
}

.result-box {
  height: 200px;
  border-bottom: 1px solid;
}

.result-box .result {
  display: flex;
  flex-flow: column;
  justify-content: space-evenly;
}

.action {
  height: 50px;
  display: flex;
  justify-content: space-around;
  align-items: center;
}

img,
video {
  width: 100px;
  object-fit: contain;
}

/* progress bar */
.progress {
  height: 10px;
  width: 100%;
  display: flex;
  justify-content: center;
  align-items: center;
}

.progress .background {
  height: 4px;
  width: 200px;
  border: 1px solid;
  border-radius: 10px;
}

.progress .background .foreground {
  height: 100%;
  background-color: pink;
}
</style>


Origin blog.csdn.net/qq_43220213/article/details/130120575