Uploading large files with Vue + Spring Boot

Foreword

As we all know, uploading large files is troublesome. The naive approach is to push the whole file to the server in a single request. That works for small files, but with a large file a network hiccup or a request timeout can make the entire upload fail. This article shows how to upload large files in a Vue + Spring Boot project.

Logic

The following logic should be considered when uploading large files:

  1. A large file upload generally splits the file into chunks, uploads the chunks, and then merges them back into the complete file. It can be implemented as follows:
  2. On the front end, select the file to upload on the page and use the Blob.slice method to split it into chunks. Each chunk is usually a fixed size (such as 5 MB); record the total number of chunks.
  3. Upload the chunks to the backend one by one, sending Ajax requests with XMLHttpRequest, Axios, or a similar library. Each chunk request carries three parameters: the current chunk index (starting from 0), the total number of chunks, and the chunk data.
  4. When the backend service receives a chunk, it saves it to a temporary file under a given path and records the uploaded chunk index and upload status. If a chunk fails to upload, the front end is notified to retransmit it.
  5. Once all chunks have been uploaded successfully, the backend reads all of them and merges them into the complete file, for example with java.io.SequenceInputStream and a BufferedOutputStream.
  6. Finally, return a success response to the front end.

Front end

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>File Upload</title>
</head>
<body>
    <input type="file" id="fileInput">
    <button onclick="upload()">Upload</button>
    <!-- The upload code below uses jQuery's $.ajax, so jQuery must be included -->
    <script src="https://code.jquery.com/jquery-3.6.4.min.js"></script>
    <script>
        function upload() {
            let file = document.getElementById("fileInput").files[0];
            if (!file) {
                alert("Please choose a file first.");
                return;
            }
            let chunkSize = 5 * 1024 * 1024; // each chunk is 5 MB
            let totalChunks = Math.ceil(file.size / chunkSize); // total number of chunks
            let uploaded = 0; // how many chunks have finished uploading
            for (let index = 0; index < totalChunks; index++) {
                let chunk = file.slice(index * chunkSize, (index + 1) * chunkSize);
                let formData = new FormData();
                formData.append("file", chunk);
                formData.append("index", index);
                formData.append("totalChunks", totalChunks);
                // send an Ajax request to upload this chunk
                $.ajax({
                    url: "/uploadChunk",
                    type: "POST",
                    data: formData,
                    processData: false,
                    contentType: false,
                    success: function () {
                        // all chunks uploaded: ask the server to merge the file
                        if (++uploaded === totalChunks) {
                            $.post("/mergeFile", {fileName: file.name}, function () {
                                alert("Upload complete!");
                            });
                        }
                    }
                });
            }
        }
    </script>
</body>
</html>

Back end

Controller layer:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

@RestController
public class FileController {

    @Value("${file.upload-path}")
    private String uploadPath;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @PostMapping("/uploadChunk")
    public void uploadChunk(@RequestParam("file") MultipartFile file,
                            @RequestParam("index") int index,
                            @RequestParam("totalChunks") int totalChunks) throws IOException {
        // save the chunk as "<original file name>.<chunk index>"
        String fileName = file.getOriginalFilename();
        Path tempFile = Paths.get(uploadPath, fileName + "." + index);
        Files.write(tempFile, file.getBytes());
        // record the uploaded chunk index in a Redis set keyed by the file name
        redisTemplate.opsForSet().add("upload:" + fileName, index);
        // if every chunk has arrived, trigger the merge
        if (isAllChunksUploaded(fileName, totalChunks)) {
            sendMergeRequest(fileName);
        }
    }

    @PostMapping("/mergeFile")
    public void mergeFile(@RequestParam("fileName") String fileName) throws IOException {
        int totalChunks = getTotalChunks(fileName);
        if (totalChunks == 0) {
            return; // nothing to merge (or already merged)
        }
        // all chunks uploaded successfully: merge them into the final file
        List<InputStream> chunkStreams = new ArrayList<>();
        for (int i = 0; i < totalChunks; i++) {
            chunkStreams.add(Files.newInputStream(Paths.get(uploadPath, fileName + "." + i)));
        }
        Path destFile = Paths.get(uploadPath, fileName);
        try (SequenceInputStream seqIn = new SequenceInputStream(Collections.enumeration(chunkStreams));
             OutputStream out = new BufferedOutputStream(Files.newOutputStream(destFile))) {
            byte[] buffer = new byte[8192];
            int len;
            while ((len = seqIn.read(buffer)) > 0) {
                out.write(buffer, 0, len);
            }
        }
        // clean up the temporary chunk files and the upload-status record
        for (int i = 0; i < totalChunks; i++) {
            Files.deleteIfExists(Paths.get(uploadPath, fileName + "." + i));
        }
        redisTemplate.delete("upload:" + fileName);
    }

    private int getTotalChunks(String fileName) {
        // count the chunk files named "<fileName>.<index>" in the upload directory
        File[] chunks = new File(uploadPath)
                .listFiles((dir, name) -> name.startsWith(fileName + "."));
        return chunks == null ? 0 : chunks.length;
    }

    private boolean isAllChunksUploaded(String fileName, int totalChunks) {
        // all chunks are present once the set of recorded indexes is complete
        Long uploaded = redisTemplate.opsForSet().size("upload:" + fileName);
        return uploaded != null && uploaded == totalChunks;
    }

    private void sendMergeRequest(String fileName) {
        // call our own /mergeFile endpoint asynchronously
        new Thread(() -> {
            try {
                URL url = new URL("http://localhost:8080/mergeFile");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded;charset=utf-8");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(("fileName=" + URLEncoder.encode(fileName, "UTF-8")).getBytes("UTF-8"));
                }
                conn.getResponseCode(); // block until the merge request has been handled
                conn.disconnect();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }).start();
    }
}

Here, file.upload-path is the path where uploaded files are saved; it can be configured in application.properties or application.yml. A RedisTemplate bean also needs to be available to record the upload status.
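For example, assuming uploads should be stored under /data/upload (an illustrative path, not one fixed by the code above), the property could look like this in application.properties:

```properties
# Save path for uploaded files and temporary chunk files (example value)
file.upload-path=/data/upload
```

The directory must exist and be writable by the application, since the controller writes chunk files directly into it.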

RedisTemplate configuration

To use RedisTemplate, add the following dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

Then configure the Redis connection in application.properties (or the equivalent YAML):

spring.redis.host=localhost
spring.redis.port=6379
spring.redis.database=0
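Note that Spring Boot's auto-configuration only provides a RedisTemplate&lt;Object, Object&gt;, so injecting a RedisTemplate&lt;String, Object&gt; as the controller above does generally requires declaring that bean yourself. A minimal sketch (the serializer choices here are one common option, not the only one):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisConfig {

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        // string keys stay human-readable in redis-cli; values are stored as JSON
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}
```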

Then you can use it in your own classes like this:

@Component
public class MyClass {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    public void set(String key, Object value) {
        redisTemplate.opsForValue().set(key, value);
    }

    public Object get(String key) {
        return redisTemplate.opsForValue().get(key);
    }
}

Precautions

Control the size of each uploaded chunk, balancing upload speed against stability, so that uploads neither consume too many server resources nor fail on an unstable network.

Chunks can finish uploading out of order, so make sure every chunk has arrived before merging; otherwise the resulting file may be incomplete or merged incorrectly.

Clean up the temporary chunk files promptly after the upload completes, so they do not fill the disk and crash the server. A periodic task can be set up to clean up expired temporary files.
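The periodic-cleanup idea can be sketched as a plain helper. The 24-hour threshold and the "&lt;name&gt;.&lt;index&gt;" chunk naming are assumptions carried over from the controller above; in a Spring Boot app this method could be invoked from an @Scheduled task:

```java
import java.io.File;

public class ChunkCleaner {

    // Delete leftover chunk files older than maxAgeMillis from the upload directory.
    // Chunk files are saved as "<original name>.<index>", so a trailing numeric
    // extension marks a temporary chunk (the final merged file never matches).
    public static int cleanExpiredChunks(File uploadDir, long maxAgeMillis) {
        File[] files = uploadDir.listFiles();
        if (files == null) {
            return 0; // directory missing or not readable
        }
        long cutoff = System.currentTimeMillis() - maxAgeMillis;
        int deleted = 0;
        for (File f : files) {
            boolean isChunk = f.getName().matches(".*\\.\\d+$"); // ends in ".<digits>"
            if (isChunk && f.lastModified() < cutoff && f.delete()) {
                deleted++;
            }
        }
        return deleted;
    }
}
```

Calling cleanExpiredChunks(new File(uploadPath), 24L * 3600 * 1000) once an hour would remove chunks from uploads abandoned more than a day ago.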

Epilogue

The above is the overall logic of uploading large files with Vue + Spring Boot.


Origin blog.csdn.net/xc9711/article/details/130266016