Spring Boot + WebUploader elegantly implements very large file uploads (2)

Foreword

Welcome back! This article picks up where the previous one left off: Spring Boot + WebUploader elegantly implements the upload of large files (1), which covered the overall approach, the principles, and the main front-end code for large file uploads. This article focuses on the following questions:

5. How do the front end and back end verify whether a chunk has already been uploaded?

6. How does the back end handle a chunk upload request?

7. In the WebUploader component, where is the request to merge the file chunks triggered?

8. How does the back end merge the chunks?

9. After a chunk upload fails, how do we resume uploading from the breakpoint?

10. How is the upload progress bar implemented?

Code

5. How do the front end and back end verify whether a chunk has already been uploaded?

A command registered inside WebUploader (before-send) computes the md5 of each chunk file and calls a back-end interface to check whether that chunk has already been uploaded (see question 3 in the previous article). If it has, the chunk upload interface is skipped for that chunk, and the uploadBeforeSend event is not triggered for it. (uploadBeforeSend fires before a chunk is sent and is mainly used to attach extra parameters to the request; when a large file is uploaded in chunks, this event can fire many times.)

If the chunk has not been uploaded, the uploadBeforeSend event fires and a chunk upload request is sent, carrying the chunk's parameters.

So how does the back end check whether a chunk has been uploaded? As follows:

1. In the chunk upload interface, after a chunk is uploaded successfully, its metadata is saved: the chunk file's md5, the whole file's md5, the file size, the chunk's storage location, the start and end byte positions of the chunk within the whole file, the total number of chunks, and so on. Here redis is used to cache this metadata in a hash structure: the key is the whole file's md5, the hashKey is "chunk_md5_" + the chunk index, and the value is the chunk file's md5. (A database or any other storage medium would work just as well.)

2. When the check interface is called, the stored chunk md5 is looked up by the chunk index passed from the front end and compared with the chunk md5 the front end sent. If they are equal, the chunk has already been uploaded successfully; if not, it has not been uploaded.

@PostMapping("/check")
public boolean check(String fileMd5,String chunk,String chunkMd5) {
    Object o = redisTemplate.opsForHash().get(fileMd5, "chunk_md5_"+chunk);
    if (chunkMd5.equals(o)) {
        return true;
    }
    return false;
}

6. How does the back end handle a chunk upload request?

The back end does two main things when handling a chunk upload:

First, it saves the chunk file to disk (or to some other network storage). Pay attention to the chunk file naming rule; keeping it regular makes the later merge much easier. Here the naming rule is: chunk md5 value + "_" + chunk index.

Second, it saves the chunk's metadata. In real business code this could go into a cache or a database; here it is only cached. The cached data structure is a hash whose key is the whole file's md5 value; the hashKey/hashValue pairs are as follows:

hashKey | hashValue | Description
"chunk_location_" + chunk index | absolute path where the chunk file is stored | storage location of the chunk file
"chunk_start_end_" + chunk index | start position + "_" + end position | start and end byte positions of the chunk within the whole file
"chunk_md5_" + chunk index | md5 value of the chunk file | md5 value of the chunk file
file_size | total size of the whole file in bytes | total size of the whole file in bytes
file_chunks | total number of chunks the whole file was split into | total number of chunks the whole file was split into

/**
 * Chunk upload interface
 *
 * @param request
 * @param multipartFile
 * @return
 */
@PostMapping("/upload")
public String upload(HttpServletRequest request, MultipartFile multipartFile) {
    log.info("Uploading chunk....");
    // doRequestParam copies all request parameters into a map (defined in the full listing below)
    Map<String, String> requestParam = this.doRequestParam(request);
    String md5Value = requestParam.get("md5Value");  // md5 of the whole file
    String chunkIndex = requestParam.get("chunk");   // index of this chunk among all chunks
    String start = requestParam.get("start");        // start position of this chunk within the whole file
    String end = requestParam.get("end");            // end position of this chunk within the whole file
    String chunks = requestParam.get("chunks");      // total number of chunks
    String fileSize = requestParam.get("size");      // size of the whole file
    String chunkMd5 = requestParam.get("chunkMd5");  // md5 of the chunk file
    String userDir = System.getProperty("user.dir");
    // Chunk file naming rule: chunk md5 + "_" + chunk index
    String chunkFilePath = userDir + File.separator + chunkMd5 + "_" + chunkIndex;
    File file = new File(chunkFilePath);
    try {
        multipartFile.transferTo(file);
        Map<String, String> map = new HashMap<>();
        map.put("chunk_location_" + chunkIndex, chunkFilePath); // chunk storage path
        map.put("chunk_start_end_" + chunkIndex, start + "_" + end);
        map.put("file_size", fileSize);
        map.put("file_chunks", chunks);
        map.put("chunk_md5_" + chunkIndex, chunkMd5);
        redisTemplate.opsForHash().putAll(md5Value, map);
    } catch (IOException | IllegalStateException e) {
        e.printStackTrace();
    }
    return "success";
}
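
One caveat about the interface above: it caches the client-supplied chunkMd5 without recomputing it, so the server effectively trusts the front end. If you want the server to verify chunk integrity on receipt, a minimal sketch might look like the following (the md5Hex helper is hypothetical and not part of the sample project; WebUploader's md5File uses SparkMD5 on the front end, which also produces lowercase hex, so the two digests should be directly comparable):

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical helper: hex-encode the MD5 digest of a byte array
private static String md5Hex(byte[] data) throws NoSuchAlgorithmException {
    byte[] digest = MessageDigest.getInstance("MD5").digest(data);
    StringBuilder sb = new StringBuilder(digest.length * 2);
    for (byte b : digest) {
        sb.append(String.format("%02x", b));
    }
    return sb.toString();
}

// Inside upload(), before caching the chunk metadata:
// String serverMd5 = md5Hex(multipartFile.getBytes());
// if (!serverMd5.equals(chunkMd5)) {
//     return "chunk md5 mismatch"; // reject the chunk; chunkRetry on the front end can resend it
// }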

7. In the WebUploader component, where is the request to merge the file chunks triggered?

WebUploader exposes a pair of events, uploadSuccess and uploadError: uploadSuccess fires when a file finishes uploading successfully, and uploadError fires when the upload fails. uploadSuccess is therefore exactly the right place to send the back end the request to merge the chunk files.

// Fires when the file has been uploaded successfully
uploader.on('uploadSuccess', function (file) {
    // Once all chunks of the large file are uploaded, ask the back end to merge them
    $.ajax({
        url: 'http://localhost:8080/file/merge',
        method: 'post',
        data: {'md5Value': file.wholeMd5, 'originalFilename': file.name},
        success: function (res) {
            alert('Large file uploaded successfully!')
        }
    })
    $('#' + file.id).find('p.state').append('File uploaded successfully<br/>');
});

8. How does the back end merge the chunks?

When all chunks have been uploaded successfully, WebUploader's uploadSuccess event fires and the back end's merge interface is called. Its main business logic:

1. Verify that every chunk has been uploaded. (Each successful chunk upload stores the chunk md5 and the total chunk count in redis, under the hashKeys "chunk_md5_" + chunk index and file_chunks; if the number of stored chunk md5s equals the total chunk count, all chunks have been uploaded.)

/**
 * Before merging, verify that all chunks of the file have been uploaded
 *
 * @param key md5 of the whole file (the redis hash key)
 * @return true if every chunk is present
 */
private boolean checkBeforeMerge(String key) {
    Map map = redisTemplate.opsForHash().entries(key);
    Object file_chunks = map.get("file_chunks");
    // Count how many chunk md5 entries have been stored
    int i = 0;
    for (Object hashKey : map.keySet()) {
        if (hashKey.toString().startsWith("chunk_md5_")) {
            ++i;
        }
    }
    return Integer.parseInt(file_chunks.toString()) == i;
}

2. If the same file content has already been uploaded under a different name, the md5 values will be identical, so the already-merged file is simply copied under the new name.

3. Before merging, the storage locations of the chunk files are read back from redis. Pay special attention here: the chunks must be merged in ascending index order, otherwise the merged file cannot be opened or run. Chunk uploads run concurrently, so the order in which they reach the back end can differ every time, but each chunk's index never changes. So we can iterate over [0, total chunk count - 1], fetch each chunk's storage path from redis in turn, and append its contents to a new file.

4. After all chunks have been written, close the input and output streams and delete the chunk files (once the chunks are merged into the complete file, the chunk files are no longer needed; the cached chunk metadata is no longer needed either and can be deleted too; in real business code, keep whatever your requirements call for).

/**
 * Merge-chunks interface
 *
 * @param request
 * @return
 * @throws IOException
 */
@PostMapping("/merge")
public String merge(HttpServletRequest request) throws IOException {
    log.info("Merging chunks...");
    Map<String, String> requestParam = this.doRequestParam(request);
    String md5Value = requestParam.get("md5Value");
    String originalFilename = requestParam.get("originalFilename");
    // Verify that all chunks have been uploaded
    boolean flag = this.checkBeforeMerge(md5Value);
    if (!flag) {
        return "Not all chunks have been uploaded";
    }
    // Check whether a file with the same md5 has already been uploaded; for a file that is
    // identical in content but different in name, just copy the existing file
    Object file_location = redisTemplate.opsForHash().get(md5Value, "file_location");
    if (file_location != null) {
        String source = file_location.toString();
        File file = new File(source);
        if (!file.getName().equals(originalFilename)) {
            File target = new File(System.getProperty("user.dir") + File.separator + originalFilename);
            Files.copy(file.toPath(), target.toPath());
            return "success";
        }
    }
    // Note: the chunks must be merged in ascending index order, otherwise the file is unusable
    Integer file_chunks = Integer.valueOf(redisTemplate.opsForHash().get(md5Value, "file_chunks").toString());
    String userDir = System.getProperty("user.dir");
    File writeFile = new File(userDir + File.separator + originalFilename);
    OutputStream outputStream = new FileOutputStream(writeFile);
    InputStream inputStream = null;
    for (int i = 0; i < file_chunks; i++) {
        String tmpPath = redisTemplate.opsForHash().get(md5Value, "chunk_location_" + i).toString();
        File readFile = new File(tmpPath);
        inputStream = new FileInputStream(readFile);
        byte[] bytes = new byte[1024 * 1024];
        int len;
        // Write only the bytes actually read; writing the whole buffer would
        // pad each chunk with stale data and corrupt the merged file
        while ((len = inputStream.read(bytes)) != -1) {
            outputStream.write(bytes, 0, len);
        }
        inputStream.close();
    }
    outputStream.close();
    redisTemplate.opsForHash().put(md5Value, "file_location", userDir + File.separator + originalFilename);
    this.delTmpFile(md5Value);
    return "success";
}

private void delTmpFile(String md5Value) {
    Map map = redisTemplate.opsForHash().entries(md5Value);
    List<String> list = new ArrayList<>();
    for (Object hashKey : map.keySet()) {
        // Delete each chunk file from disk and collect its hashKey for removal
        if (hashKey.toString().startsWith("chunk_location")) {
            String filePath = map.get(hashKey).toString();
            File file = new File(filePath);
            boolean flag = file.delete();
            list.add(hashKey.toString());
            log.info("delete:" + filePath + ",:" + flag);
        }
        if (hashKey.toString().startsWith("chunk_start_end_")) {
            list.add(hashKey.toString());
        }
        if (hashKey.toString().startsWith("chunk_md5_")) {
            list.add(hashKey.toString());
        }
    }
    list.add("file_chunks");
    list.add("file_size");
    // Remove all chunk-related metadata from the redis hash
    redisTemplate.opsForHash().delete(md5Value, list.toArray());
}
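
As a side note, the merge loop above closes its streams manually, which leaks them if an exception is thrown mid-merge. A resource-safe variant using try-with-resources, written as a sketch against the same redis key conventions (the mergeChunks helper is hypothetical, not part of the sample project):

// Sketch: a resource-safe version of the merge loop; streams are closed
// automatically even when an exception is thrown
private void mergeChunks(String md5Value, File writeFile, int fileChunks) throws IOException {
    try (OutputStream outputStream = new FileOutputStream(writeFile)) {
        for (int i = 0; i < fileChunks; i++) {
            String tmpPath = redisTemplate.opsForHash()
                    .get(md5Value, "chunk_location_" + i).toString();
            // Each chunk's input stream is closed at the end of its own try block
            try (InputStream inputStream = new FileInputStream(tmpPath)) {
                byte[] bytes = new byte[1024 * 1024];
                int len;
                while ((len = inputStream.read(bytes)) != -1) {
                    outputStream.write(bytes, 0, len);
                }
            }
        }
    }
}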

9. After a chunk upload fails, how do we resume uploading from the breakpoint?

Questions 3 and 5 have already solved this. A command registered inside WebUploader (before-send) fires before each chunk is sent; at that point the chunk's md5 is computed and sent to the back end's check interface. If the chunk has already been uploaded, the call to the chunk upload interface is simply skipped; if not, only the chunks that have not yet been uploaded are sent (see the beforeSend handler in the full webuploader2.html listing below).

10. How is the upload progress bar implemented?

WebUploader's uploadProgress event fires repeatedly during the upload and carries the current progress as a parameter (a fraction between 0 and 1);

// Create and update a progress bar in real time while the file uploads
uploader.on('uploadProgress', function (file, percentage) {
    var $li = $('#' + file.id),
        $percent = $li.find('.progress .progress-bar');
    // Create the progress bar element on the first call
    if (!$percent.length) {
        $percent = $('<div class="progress progress-striped active">' +
            '<div class="progress-bar" role="progressbar" style="width: 0%">' +
            '</div>' +
            '</div>').appendTo($li).find('.progress-bar');
    }
    $percent.css('width', percentage * 100 + '%');
});

Summary

Explaining something fully is genuinely not easy. I am not sure whether I actually made it all clear; I feel I did, so tell me in the comments.

1. On the back end, most of the code we write executes synchronously and in order, but asynchronous execution is everywhere on the front end; this article was a chance to relearn how promise and deferred are used.

2. Do not assume that understanding an article means you truly understand the subject. Knowledge gained on paper always feels shallow; to really know a thing you have to practice it yourself. What others say is not necessarily right, or was only right in the author's circumstances at the time, and how can you be sure your scenario matches theirs? So please ask plenty of questions, and let's discuss and improve together.

Below are the complete example files for reference:

FileController.java

@RestController
@RequestMapping("/file")
@Slf4j
public class FileController {
    @Resource
    private RedisTemplate redisTemplate;
    /**
     * Check whether a chunk file has already been uploaded
     *
     * @param fileMd5  md5 of the whole file
     * @param chunk    index of this chunk among all chunks
     * @param chunkMd5 md5 of the chunk file
     * @return true if the chunk has already been uploaded
     */
    @PostMapping("/check")
    public boolean check(String fileMd5, String chunk, String chunkMd5) {
        Object o = redisTemplate.opsForHash().get(fileMd5, "chunk_md5_" + chunk);
        return chunkMd5.equals(o);
    }

    /**
     * Chunk upload interface
     *
     * @param request
     * @param multipartFile
     * @return
     */
    @PostMapping("/upload")
    public String upload(HttpServletRequest request, MultipartFile multipartFile) {
        log.info("Uploading chunk....");
        Map<String, String> requestParam = this.doRequestParam(request);
        String md5Value = requestParam.get("md5Value");  // md5 of the whole file
        String chunkIndex = requestParam.get("chunk");   // index of this chunk among all chunks
        String start = requestParam.get("start");        // start position of this chunk within the whole file
        String end = requestParam.get("end");            // end position of this chunk within the whole file
        String chunks = requestParam.get("chunks");      // total number of chunks
        String fileSize = requestParam.get("size");      // size of the whole file
        String chunkMd5 = requestParam.get("chunkMd5");  // md5 of the chunk file
        String userDir = System.getProperty("user.dir");
        // Chunk file naming rule: chunk md5 + "_" + chunk index
        String chunkFilePath = userDir + File.separator + chunkMd5 + "_" + chunkIndex;
        File file = new File(chunkFilePath);
        try {
            multipartFile.transferTo(file);
            Map<String, String> map = new HashMap<>();
            map.put("chunk_location_" + chunkIndex, chunkFilePath); // chunk storage path
            map.put("chunk_start_end_" + chunkIndex, start + "_" + end);
            map.put("file_size", fileSize);
            map.put("file_chunks", chunks);
            map.put("chunk_md5_" + chunkIndex, chunkMd5);
            redisTemplate.opsForHash().putAll(md5Value, map);
        } catch (IOException | IllegalStateException e) {
            e.printStackTrace();
        }
        return "success";
    }

    /**
     * Merge-chunks interface
     *
     * @param request
     * @return
     * @throws IOException
     */
    @PostMapping("/merge")
    public String merge(HttpServletRequest request) throws IOException {
        log.info("Merging chunks...");
        Map<String, String> requestParam = this.doRequestParam(request);
        String md5Value = requestParam.get("md5Value");
        String originalFilename = requestParam.get("originalFilename");
        // Verify that all chunks have been uploaded
        boolean flag = this.checkBeforeMerge(md5Value);
        if (!flag) {
            return "Not all chunks have been uploaded";
        }
        // Check whether a file with the same md5 has already been uploaded; for a file that is
        // identical in content but different in name, just copy the existing file
        Object file_location = redisTemplate.opsForHash().get(md5Value, "file_location");
        if (file_location != null) {
            String source = file_location.toString();
            File file = new File(source);
            if (!file.getName().equals(originalFilename)) {
                File target = new File(System.getProperty("user.dir") + File.separator + originalFilename);
                Files.copy(file.toPath(), target.toPath());
                return "success";
            }
        }
        // Note: the chunks must be merged in ascending index order, otherwise the file is unusable
        Integer file_chunks = Integer.valueOf(redisTemplate.opsForHash().get(md5Value, "file_chunks").toString());
        String userDir = System.getProperty("user.dir");
        File writeFile = new File(userDir + File.separator + originalFilename);
        OutputStream outputStream = new FileOutputStream(writeFile);
        InputStream inputStream = null;
        for (int i = 0; i < file_chunks; i++) {
            String tmpPath = redisTemplate.opsForHash().get(md5Value, "chunk_location_" + i).toString();
            File readFile = new File(tmpPath);
            inputStream = new FileInputStream(readFile);
            byte[] bytes = new byte[1024 * 1024];
            int len;
            // Write only the bytes actually read; writing the whole buffer would
            // pad each chunk with stale data and corrupt the merged file
            while ((len = inputStream.read(bytes)) != -1) {
                outputStream.write(bytes, 0, len);
            }
            inputStream.close();
        }
        outputStream.close();
        redisTemplate.opsForHash().put(md5Value, "file_location", userDir + File.separator + originalFilename);
        this.delTmpFile(md5Value);
        return "success";
    }

    @GetMapping("/download")
    public String download(String fileName, HttpServletResponse response) throws IOException {
        response.setContentType("application/octet-stream");
        response.setHeader("Content-Disposition", "attachment; filename=" + URLEncoder.encode(fileName, "UTF-8"));
        String userDir = System.getProperty("user.dir");
        File file = new File(userDir + File.separator + fileName);
        InputStream inputStream = new FileInputStream(file);
        byte[] bytes = new byte[1024 * 1024];
        ServletOutputStream outputStream = response.getOutputStream();
        while (inputStream.read(bytes) != -1) {
            outputStream.write(bytes);
        }
        inputStream.close();
        outputStream.close();
        return "success";
    }


    private void delTmpFile(String md5Value) {
        Map map = redisTemplate.opsForHash().entries(md5Value);
        List<String> list = new ArrayList<>();
        for (Object hashKey : map.keySet()) {
            // Delete each chunk file from disk and collect its hashKey for removal
            if (hashKey.toString().startsWith("chunk_location")) {
                String filePath = map.get(hashKey).toString();
                File file = new File(filePath);
                boolean flag = file.delete();
                list.add(hashKey.toString());
                log.info("delete:" + filePath + ",:" + flag);
            }
            if (hashKey.toString().startsWith("chunk_start_end_")) {
                list.add(hashKey.toString());
            }
            if (hashKey.toString().startsWith("chunk_md5_")) {
                list.add(hashKey.toString());
            }
        }
        list.add("file_chunks");
        list.add("file_size");
        // Remove all chunk-related metadata from the redis hash
        redisTemplate.opsForHash().delete(md5Value, list.toArray());
    }

    /**
     * Copy all request parameters into a map for convenient access
     */
    private Map<String, String> doRequestParam(HttpServletRequest request) {
        Map<String, String> requestParam = new HashMap<>();
        Enumeration<String> parameterNames = request.getParameterNames();
        while (parameterNames.hasMoreElements()) {
            String paramName = parameterNames.nextElement();
            String paramValue = request.getParameter(paramName);
            requestParam.put(paramName, paramValue);
            log.info(paramName + ":" + paramValue);
        }
        log.info("----------------------------");
        return requestParam;
    }

    /**
     * Before merging, verify that all chunks of the file have been uploaded
     *
     * @param key md5 of the whole file (the redis hash key)
     * @return true if every chunk is present
     */
    private boolean checkBeforeMerge(String key) {
        Map map = redisTemplate.opsForHash().entries(key);
        Object file_chunks = map.get("file_chunks");
        // Count how many chunk md5 entries have been stored
        int i = 0;
        for (Object hashKey : map.keySet()) {
            if (hashKey.toString().startsWith("chunk_md5_")) {
                ++i;
            }
        }
        return Integer.parseInt(file_chunks.toString()) == i;
    }
}

webuploader2.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
    <script type="text/javascript" src="https://code.jquery.com/jquery-3.1.1.min.js"></script>
    <script type="text/javascript" src="http://localhost:8080/lib/webuploader.js"></script>
    <link rel="stylesheet" href="lib/style.css"></link>
    <link rel="stylesheet" href="lib/webuploader.css"></link>
    <link rel="stylesheet" href="lib/bootstrap.min.css"></link>
    <link rel="stylesheet" href="lib/bootstrap-theme.min.css"></link>
    <link rel="stylesheet" href="lib/font-awesome.min.css"></link>
    <!--    <script type="text/javascript" src="http://localhost:8080/lib/spark-md5.min.js"></script>-->
</head>
<body>
<div style="width: 60%">
    <div id="uploader" class="wu-example">
        <!-- container for the file list and status info -->
        <div id="thelist" class="uploader-list"></div>
        <div class="btns">
            <div id="picker">选择文件</div>
            <button id="ctlBtn" class="btn btn-default">开始上传</button>
        </div>
    </div>
    <div id="log">
    </div>
</div>

</body>
<script type="text/javascript">
    var file_md5 = '';
    var uploader;
    //md5FlagMap stores a flag per file marking whether its md5 has been computed;
    //with multiple files, the key is the file name and the value is true or false
    var md5FlagMap = new Map();
    WebUploader.Uploader.register({
        "add-file": "addFile",
        "before-send-file": "beforeSendFile",
        "before-send": "beforeSend",
        "after-send-file": "afterSendFile"
    }, {
        addFile: function (file) {
            console.log('1', file)
        },
        beforeSendFile: function (file) {
            console.log('2', file)
            // The whole-file md5 computation has been moved from here to the
            // fileQueued event handler below
        },
        beforeSend: function (block) {
            console.log(3)
            var file = block.file;
            var deferred = WebUploader.Base.Deferred();
            // Compute the md5 of this chunk, then ask the back end whether it
            // has already been uploaded
            (new WebUploader.Uploader()).md5File(file, block.start, block.end).then(function (value) {
                $.ajax({
                    url: 'http://localhost:8080/file/check', // check whether this chunk is already uploaded
                    method: 'post',
                    data: {chunkMd5: value, fileMd5: file_md5, chunk: block.chunk},
                    success: function (res) {
                        if (res) {
                            deferred.reject();       // already uploaded: skip this chunk
                        } else {
                            deferred.resolve(value); // not uploaded yet: carry on
                        }
                    }
                });
            })
            deferred.done(function (value) {
                block.chunkMd5 = value;
            })
            return deferred;
        },
        afterSendFile: function (file) {
            console.log('4', file)
        }
    })

    uploader = WebUploader.create({
        // path to the swf file
        swf: 'http://localhost:8080/lib/Uploader.swf',
        // chunk upload interface
        server: 'http://localhost:8080/file/upload',
        // the file-picking button; optional
        pick: '#picker',
        fileVal: 'multipartFile', // parameter name the back end uses to receive the uploaded file
        chunked: true,            // enable chunked upload
        chunkSize: 1024 * 1024 * 10, // chunk size
        chunkRetry: 2,            // retry count; chunk uploads occasionally fail for network reasons
        threads: 3                // maximum number of concurrent uploads
    });

    /**
     * Fires when a file is added to the queue.
     * Main logic: 1. once the file is queued, start computing its md5;
     * 2. the md5 computation is asynchronous, and the bigger the file, the longer it takes;
     * 3. md5FlagMap holds a per-file flag; when the computation finishes, set the flag to true.
     */
    uploader.on('fileQueued', function (file) {
        md5FlagMap.set(file.name, false); // md5 flag defaults to false
        var deferred = WebUploader.Deferred(); // deferred tracks the state of the async md5 computation
        uploader.md5File(file, 0, file.size - 1).then(function (fileMd5) {
            file.wholeMd5 = fileMd5;
            file_md5 = fileMd5;
            deferred.resolve(file.name); // md5 done: resolve, which triggers deferred.done() below
        })
        // The bigger the file, the longer its md5 takes, so computing it asynchronously
        // is the right call; while it runs, execution continues here
        $('#thelist').append('<div id="' + file.id + '" class="item">' +
            '<h4 class="info">' + file.name + '</h4>' +
            '<p class="state">Computing md5 of the large file......<br/></p>' +
            '</div>')
        // Called once the file's md5 computation completes
        deferred.done(function (name) {
            md5FlagMap.set(name, true); // mark the md5 as computed
            $('#' + file.id).find('p.state').append('md5 of the large file computed<br/>');
        })
        return deferred.promise();
    })

    // In chunked mode, fires before each chunk of the file is sent
    uploader.on('uploadBeforeSend', function (block, data) {
        var file = block.file;
        // data carries extra parameters to the back end
        data.originalFilename = file.originalFilename; // file name
        data.md5Value = file.wholeMd5;  // md5 of the whole file
        data.start = block.start;       // start position of this chunk within the whole file
        data.end = block.end;           // end position of this chunk within the whole file
        data.chunk = block.chunk;       // index of this chunk
        data.chunks = block.chunks;     // total number of chunks
        data.chunkMd5 = block.chunkMd5; // md5 of the chunk file
    });
    // Create and update a progress bar in real time while the file uploads
    uploader.on('uploadProgress', function (file, percentage) {
        var $li = $('#' + file.id),
            $percent = $li.find('.progress .progress-bar');
        // Create the progress bar element on the first call
        if (!$percent.length) {
            $percent = $('<div class="progress progress-striped active">' +
                '<div class="progress-bar" role="progressbar" style="width: 0%">' +
                '</div>' +
                '</div>').appendTo($li).find('.progress-bar');
        }
        $percent.css('width', percentage * 100 + '%');
    });
    // Fires when the file has been uploaded successfully
    uploader.on('uploadSuccess', function (file) {
        // Once all chunks of the large file are uploaded, ask the back end to merge them
        $.ajax({
            url: 'http://localhost:8080/file/merge',
            method: 'post',
            data: {'md5Value': file.wholeMd5, 'originalFilename': file.name},
            success: function (res) {
                alert('Large file uploaded successfully!')
            }
        })
        $('#' + file.id).find('p.state').append('File uploaded successfully<br/>');
    });
    // Fires when a file upload fails
    uploader.on('uploadError', function (file) {
        $('#' + file.id).find('p.state').text('Upload failed<br/>');
    });
    // Fires when the upload finishes, whether it succeeded or failed
    uploader.on('uploadComplete', function (file) {
        $('#' + file.id).find('.progress').fadeOut();
    });
    // Fires when the start-upload button is clicked
    $('#ctlBtn').click(function () {
        // md5FlagMap holds the md5-computed flag for each file;
        // when uploading several files at once, check before uploading that every
        // file's md5 has finished computing, and keep waiting otherwise.
        // If any file's md5 is still computing, do not start the upload; in real
        // business code this interaction can be refined, e.g. start uploading each
        // file as soon as its own md5 is ready.
        var uploadFlag = true;
        md5FlagMap.forEach(function (value, key) {
            if (!value) {
                uploadFlag = false;
                alert('md5 computation in progress, please wait') // shown while a file's md5 is still computing
            }
        })
        if (uploadFlag) {
            uploader.upload(); // all md5s computed: start the chunked upload
        }
    })
</script>
</html>
