Implementing chunked upload of large files [complete front-end and back-end version]

During product development you will often run into a video-upload requirement. A common workaround is to limit the size of the video so that the upload request does not time out and fail. In that situation, uploading the video in chunks can noticeably improve the experience of your project.

In this article the front end is Vue and the back end is PHP; together they implement chunked upload. Feel free to use it as a reference if you need something similar.

Chunked upload

1. What is chunked upload

Chunked upload means splitting the file to be uploaded into multiple data blocks (called chunks, or Parts) of a given size and uploading them separately. Once all chunks have been uploaded, the server collects them and merges them back into the original file.

2. When to use chunked upload

(1) Large file uploads

(2) Poor network environments, where failed chunks may need to be retransmitted

3. Implementation steps

a. Scheme 1: the conventional steps (the approach implemented in this article)

Split the file to be uploaded into equal-size chunks according to a fixed splitting rule (a minimal slicing sketch follows this list);

Initialize a chunked-upload task and return a unique identifier for it;

Send each chunk according to a chosen strategy (serially or in parallel);

After sending finishes, the server checks whether the uploaded data is complete; if it is, the chunks are merged to reconstruct the original file.
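
As a minimal sketch of the splitting step (plain browser JavaScript using Blob.slice; the helper name splitIntoChunks is just for illustration):

// Split a File/Blob into equal-size chunks; the last chunk may be smaller.
function splitIntoChunks(file, chunkSize) {
  const chunks = []
  for (let start = 0; start < file.size; start += chunkSize) {
    chunks.push(file.slice(start, Math.min(file.size, start + chunkSize)))
  }
  return chunks
}

// Example: 3 MB chunks, the same pieceSize used later in this article.
// const parts = splitIntoChunks(videoFile, 3 * 1024 * 1024)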

b. Scheme 2

The front end (client) splits the file into chunks of a fixed size and, when calling the back end (server), sends each chunk's sequence number and size along with it.

The server creates a conf file to record which chunks have arrived; its length equals the total number of chunks. Each time a chunk is uploaded, a 127 (Byte.MAX_VALUE) is written at that chunk's position in the conf file, while positions that have not been uploaded stay 0 by default. (This is the core step for resumable upload and instant upload; a rough sketch follows these steps.)

The server computes the write offset from the chunk number in the request and the fixed chunk size, and writes the received chunk data into the target file at that offset.
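
Scheme 2 is not implemented in this article, but a rough sketch of the conf-file idea looks like this (Node.js here so the example stays in JavaScript; file paths and function names are purely illustrative):

const fs = require('fs')

// Mark chunk `chunkNumber` (1-based) as uploaded in a conf file whose length
// equals the total number of chunks: byte value 127 = uploaded, 0 = missing.
function markChunkUploaded(confPath, chunkNumber, totalChunks) {
  if (!fs.existsSync(confPath)) {
    fs.writeFileSync(confPath, Buffer.alloc(totalChunks, 0))
  }
  const fd = fs.openSync(confPath, 'r+')
  fs.writeSync(fd, Buffer.from([127]), 0, 1, chunkNumber - 1)
  fs.closeSync(fd)
}

// The upload can resume by skipping every chunk already marked 127,
// and the file is complete once no byte in the conf file is still 0.
function isComplete(confPath) {
  return !fs.readFileSync(confPath).includes(0)
}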

front-end code

template


// Upload button styles
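
The original template markup is not reproduced in the post. Purely as a hypothetical illustration (assuming an iView/View UI Upload component, since the handler below returns false to stop the component's default upload and uses this.$Message), the wiring might look roughly like this:

<Upload action="" :before-upload="videoSaveToUrl">
  <Button>Upload video</Button>
</Upload>
<!-- progress and videoIng are the data fields updated in the handler below -->
<Progress v-if="videoIng" :percent="progress" />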

import method

import { uploadByPieces } from "@/utils/upload"; // import the uploadByPieces helper
methods
// chunked upload
videoSaveToUrl(file) {
  uploadByPieces({
    file: file, // the selected video file
    pieceSize: 3, // chunk size, 3 MB per chunk here
    success: (data) => {
      this.formValidate.video_link = data.file_path;
      this.progress = 100;    // upload finished, progress bar at 100%
    },
    error: (e) => {
      this.$Message.error(e.msg);  // show the error message
    },
    uploading: (chunk, allChunk) => {
      this.videoIng = true;   // show the progress bar while uploading, adapt as needed
      let st = Math.floor((chunk / allChunk) * 100);  // percentage = current chunk number / total chunks
      this.progress = st;
    },
  });
  return false;
},
utils/upload

import md5 from 'js-md5' // MD5 hashing library
import { upload } from '@/api/upload.js'  // the front end's API method for calling the upload interface
export const uploadByPieces = ({ file, pieceSize = 2, success, error, uploading }) => {
    // return immediately if no file was passed in
    if (!file) return
    let fileMD5 = '' // MD5 of the whole file
    const chunkSize = pieceSize * 1024 * 1024 // chunk size in bytes (pieceSize is in MB)
    const chunkCount = Math.ceil(file.size / chunkSize) // total number of chunks
    console.log(chunkSize, chunkCount)
    // compute the MD5 of the file
    const readFileMD5 = () => {
        // read the video file and hash it
        console.log('computing the MD5 of the file')
        let fileReaderInstance = new FileReader()
        console.log('file', file)
        fileReaderInstance.readAsBinaryString(file)
        fileReaderInstance.addEventListener('load', e => {
            let fileBlob = e.target.result
            fileMD5 = md5(fileBlob)
            console.log('fileMD5', fileMD5)
            console.log('file has not been uploaded yet, uploading in chunks')
            readChunkMD5()
        })
    }
    const getChunkInfo = (file, currentChunk, chunkSize) => {
        let start = currentChunk * chunkSize
        let end = Math.min(file.size, start + chunkSize)
        let chunk = file.slice(start, end)
        return { start, end, chunk }
    }
    // process the file chunk by chunk
    const readChunkMD5 = async () => {
        // upload each chunk of this file in sequence
        try {
            for (var i = 0; i < chunkCount; i++) {
                const { chunk } = getChunkInfo(file, i, chunkSize)
                console.log('total chunks: ' + chunkCount)
                console.log('uploading chunk index: ' + i)
                await uploadChunk({ chunk, currentChunk: i, chunkCount })
            }
        } catch (e) {
            // stop the loop and surface the error to the caller
            error && error(e)
        }
    }
    const uploadChunk = (chunkInfo) => {
        // progressFun()
        return new Promise((resolve, reject) => {
            let config = {
                headers: {
                    'Content-Type': 'multipart/form-data'
                }
            }
            // build the FormData object; adapt the fields below to whatever your back end expects
            let fetchForm = new FormData()
            fetchForm.append('chunkNumber', chunkInfo.currentChunk + 1)  // which chunk this is (1-based)
            fetchForm.append('chunkSize', chunkSize)  // the chunk size limit, e.g. 3 MB here
            fetchForm.append('currentChunkSize', chunkInfo.chunk.size)  // actual size of this chunk
            fetchForm.append('file', chunkInfo.chunk)   // the chunk data itself
            fetchForm.append('filename', file.name)  // original file name
            fetchForm.append('totalChunks', chunkInfo.chunkCount) // total number of chunks
            fetchForm.append('md5', fileMD5)
            upload(fetchForm, config).then(res => {
                console.log('chunk upload response:', res)
                if (res.data.code == 1) {
                    // this chunk was stored; report progress and let the loop continue
                    uploading(chunkInfo.currentChunk + 1, chunkInfo.chunkCount)
                    resolve(true)
                } else if (res.data.code == 2) {
                    // the back end returns 2 after the last chunk arrives and the file has been merged
                    if ((chunkInfo.currentChunk + 1) == chunkInfo.chunkCount) {
                        console.log('all chunks uploaded, file merged successfully')
                        success(res.data)
                    }
                    resolve(true)
                } else {
                    // any other code is treated as a failure
                    reject(res.data)
                }
            }).catch((e) => {
                reject(e)
            })
        })
    }
    readFileMD5() // kick off the upload
}
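
The upload function imported from '@/api/upload.js' is the project's own request wrapper and is not shown in the post. A minimal sketch of what it could look like, assuming axios and a back-end route that maps to the videoUpload controller action below (the URL here is a placeholder, not the project's real route):

// api/upload.js — hypothetical axios wrapper for the chunk-upload request
import axios from 'axios'

export function upload(formData, config) {
  // replace '/api/video_upload' with the route that dispatches to the videoUpload controller method
  return axios.post('/api/video_upload', formData, config)
}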

backend code

controller

/**
     * Chunked video upload
     * @return mixed
     */
    public function videoUpload()
    {
        $data = $this->request->postMore([
            ['chunkNumber', 0],// which chunk this is
            ['currentChunkSize', 0],// actual size of this chunk
            ['chunkSize', 0],// chunk size set on the front end
            ['totalChunks', 0],// total number of chunks
            ['file', 'file'],// the chunk file
            ['md5', ''],// MD5 of the whole file
            ['filename', ''],// file name
        ]);
        $res = $this->service->videoUpload($data, $_FILES['file']);
        return app('json')->success($res);
    }

method

/**
     * Chunked video upload
     * @param $data
     * @param $file
     * @return mixed
     */
    public function videoUpload($data, $file)
    {
        $public_dir = app()->getRootPath() . 'public';
        $dir = '/uploads/attach/' . date('Y') . DIRECTORY_SEPARATOR . date('m') . DIRECTORY_SEPARATOR . date('d');
        $all_dir = $public_dir . $dir;
        if (!is_dir($all_dir)) mkdir($all_dir, 0777, true);
        // store this chunk as <filename>__<chunkNumber>
        $filename = $all_dir . '/' . $data['filename'] . '__' . $data['chunkNumber'];
        move_uploaded_file($file['tmp_name'], $filename);
        $res['code'] = 0;
        $res['msg'] = 'error';
        $res['file_path'] = '';
        if ($data['chunkNumber'] == $data['totalChunks']) {
            // the last chunk has arrived (chunks are sent serially), so merge all chunks in order
            $blob = '';
            for ($i = 1; $i <= $data['totalChunks']; $i++) {
                $blob .= file_get_contents($all_dir . '/' . $data['filename'] . '__' . $i);
            }
            file_put_contents($all_dir . '/' . $data['filename'], $blob);
            // remove the chunk files once the merged file has been written
            for ($i = 1; $i <= $data['totalChunks']; $i++) {
                @unlink($all_dir . '/' . $data['filename'] . '__' . $i);
            }
            if (file_exists($all_dir . '/' . $data['filename'])) {
                $res['code'] = 2;   // 2 = merge finished, upload complete
                $res['msg'] = 'success';
                $res['file_path'] = $dir . '/' . $data['filename'];
            }
        } else {
            if (file_exists($all_dir . '/' . $data['filename'] . '__' . $data['chunkNumber'])) {
                $res['code'] = 1;   // 1 = chunk stored, waiting for the remaining chunks
                $res['msg'] = 'waiting';
                $res['file_path'] = '';
            }
        }
        return $res;
    }

Chunked upload requires the front end and back end to cooperate. For example, the chunk size and chunk numbering used by the front end must match what the back end expects, otherwise the merge will fail. Also, file operations at this scale normally call for a dedicated file server, such as FastDFS or HDFS.

With this sample code, on a machine with a 4-core CPU and 8 GB of RAM, uploading a 24 GB file takes more than 30 minutes. Most of that time is spent computing the MD5 value on the front end; writing on the back end is comparatively fast.
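
The MD5 step above loads the entire file into memory with readAsBinaryString before hashing it. One way to reduce the memory pressure is to hash the file incrementally, slice by slice; here is a sketch assuming js-md5's incremental API (md5.create/update/hex) and the Blob.arrayBuffer() method available in modern browsers:

import md5 from 'js-md5'

// Hash a large File slice by slice instead of reading it all at once.
async function hashFileIncrementally(file, sliceSize = 10 * 1024 * 1024) {
  const hash = md5.create()
  for (let start = 0; start < file.size; start += sliceSize) {
    const buf = await file.slice(start, start + sliceSize).arrayBuffer()
    hash.update(buf)
  }
  return hash.hex()
}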

If the team feels that building its own file server takes too much time, and the project only needs upload and download, Alibaba Cloud OSS is a reasonable choice; its documentation is on the official site:

https://help.aliyun.com/product/31815.html

Note that OSS is essentially an object storage service, not a file server, so if you need to delete or modify large numbers of files, OSS may not be a good fit.

That covers the complete front-end and back-end code for chunked video upload. If you need them, you can add upload validation, resumable upload, and similar features on top of it yourself.

Origin blog.csdn.net/weixin_64051447/article/details/129853820