Chunked upload of large files in Java (front end and back end; the code runs directly once configured)

Problem

This project solves the problem of implementing chunked upload in Java. Background: in a recent company project, the author used a MultipartFile to upload video files to the server and then stored them via FastDFS. When an uploaded video is too large, the server's buf/cache memory usage climbs very high (several GB). It can be cleared manually, but that does not fix the root cause of the memory consumed by video uploads.

(Screenshot of the server's buf/cache usage omitted.)


Ideas for solving the problem

The author posted the question on CSDN with a 100-point bounty: "Urgent: how to keep large-file uploads in Java from consuming too much JVM memory?" Three ideas came back:

1. mmf: split the data into segments via a memory-mapped file and store the segments in MySQL or another database. Not suitable here, but sketched briefly after this list.
2. Run FTP on the server and implement the upload over FTP.
3. Implement chunked upload on the front end (the author uses simple-uploader), and use FastDFS's built-in append functionality on the back end to store the data (the author's chosen solution).
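
Idea 1 in miniature: the sketch below is not project code (the file name and segment size are arbitrary); it only shows how a memory-mapped file lets you read a large file segment by segment through java.nio without pulling the whole file onto the JVM heap, which is what "mmf" refers to above.

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmfChunkReader {
    public static void main(String[] args) throws Exception {
        long chunkSize = 5 * 1024 * 1024; // 5 MB per segment (arbitrary choice)
        try (RandomAccessFile raf = new RandomAccessFile("big.mp4", "r");
             FileChannel channel = raf.getChannel()) {
            long size = channel.size();
            for (long pos = 0; pos < size; pos += chunkSize) {
                long len = Math.min(chunkSize, size - pos);
                // Map only the current segment; the OS pages it in lazily,
                // so the heap never holds the whole file at once
                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, pos, len);
                byte[] segment = new byte[(int) len];
                buf.get(segment);
                // ... persist `segment` to MySQL or another store ...
            }
        }
    }
}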



Problems solved / features implemented

Front-end features: simple-uploader.js (also known as Uploader) is an upload library that supports concurrent uploads, folders, drag and drop, pause and resume, instant upload, chunked upload, automatic retry on error, manual retry, progress, remaining time, upload speed, and more. The library relies on the HTML5 File API.

Back-end features: Spring Boot integrated with FastDFS and Redis, implementing 1. chunked upload to a local path, and 2. FastDFS file upload, download, and chunked upload.



Technologies used in the project

Listed up front, so that an unavailable database or similar environment mismatch doesn't waste your time.
Technologies used: simple-uploader (front end) + FastDFS (file storage) + Spring Boot (project framework) + Redis

Project source addresses:

  • Front end: simple-uploader
  • Back end: fastDfs-demo
    Note: many thanks to the authors of these open-source projects; the author's code is modified from these two source codebases.
  • The author's own code:
    link: address
    extraction code: ey36



Effect achieved

(Screenshots of the working upload omitted here.)



Implementation principle:

  • With the chunking plugin in place, one upload becomes many requests: the plugin splits the large file into fixed-size chunks and sends each chunk as its own upload request. Once every chunk has reached the server, i.e., the upload itself is finished, the front end sends a merge request so the server can combine the chunk files into a single file. When we upload a large file, the plugin chunks it, and the browser's network panel shows a series of ajax upload requests (screenshot omitted).

  • You can see that multiple upload requests are initiated; looking at the parameters of a single upload request (screenshot omitted):

  • The guid in the first part (content-disposition) and the access_token in the second are the formData from the webuploader configuration, i.e., extra parameters passed through to the server. The remaining parts carry the file content plus the chunk metadata: chunkNumber, chunkSize, currentChunkSize, and so on. totalChunks is the total number of chunks and chunkNumber is the index of the current one; when a request's chunkNumber equals totalChunks, that request carries the last chunk.

  • Before the merge, the chunk files sit on the server unassembled (screenshot omitted).

  • Back-end verification (a sketch of such a check endpoint follows this list):
    1. In the "file added" callback, read the file with FileReader, compute its MD5, and send it to the back end.
    2.1 If the back end returns a "skip upload" flag together with the file's URL, the whole upload is skipped; this is instant upload.
    2.2 If the back end returns chunk information instead, this is a resumed upload: the back end marks, per chunk, whether it has already been uploaded, and in the chunk-verification callback you skip every chunk marked true.
    3. Each time a chunk uploads successfully, the back end returns a flag saying whether a merge is needed; in the "upload complete" callback, if that flag is true, send an ajax request asking the back end to merge.
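
The post does not include the check endpoint's code, so here is a minimal sketch of what the verification step could look like inside the same controller as the upload code below. The endpoint path, response fields, and the finishedFile: key prefix are assumptions; redisUtil and UpLoadConstant are reused from the project code.

@GetMapping("/checkChunk")
public Map<String, Object> checkChunk(@RequestParam String identifier) {
    Map<String, Object> result = new HashMap<>();
    // Instant upload: if this MD5 already maps to a stored file, skip the upload entirely
    Object finished = redisUtil.getObject("finishedFile:" + identifier); // hypothetical key prefix
    if (finished != null) {
        result.put("skipUpload", true);
        result.put("url", finished);
        return result;
    }
    // Resumed upload: report how many chunks have already arrived for this MD5
    Object uploaded = redisUtil.getObject(UpLoadConstant.uploadChunkNum + identifier);
    result.put("skipUpload", false);
    result.put("uploadedChunks", uploaded == null ? 0 : uploaded);
    return result;
}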

Code sharing

The author's modified version of the projects shared above.

Back-end core code:

API layer:

@PostMapping(value = "/fastDfsChunkUpload", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public Map chunkUpload1(MultipartFileParam multipartFileParam, HttpServletResponse response) {
    Map<String, String> map = new HashMap<>();
    long chunk = multipartFileParam.getChunkNumber();      // index of the current chunk (1-based)
    long totalChunk = multipartFileParam.getTotalChunks(); // total number of chunks
    long chunkSize = multipartFileParam.getChunkSize();    // size of each chunk
    long historyUpload = (chunk - 1) * chunkSize;          // byte offset of this chunk in the final file
    String md5 = multipartFileParam.getIdentifier();       // file MD5, used as the Redis key suffix
    MultipartFile file = multipartFileParam.getFile();
    String fileName = FileUtil.extName(file.getOriginalFilename());
    StorePath path = null;
    String groundPath;

    try {
        if (chunk == 1) {
            // First chunk: create an appender file on FastDFS
            path = appendFileStorageClient.uploadAppenderFile(UpLoadConstant.DEFAULT_GROUP, file.getInputStream(),
                    file.getSize(), fileName);
            if (path == null) {
                map.put("result", "failed to upload the first chunk");
                response.setStatus(500);
                return map;
            } else {
                redisUtil.setObject(UpLoadConstant.uploadChunkNum + md5, 1, cacheTime);
                map.put("result", "upload succeeded");
            }
            groundPath = path.getPath();
            // Remember the FastDFS path so later chunks can append to the same file
            redisUtil.setObject(UpLoadConstant.fastDfsPath + md5, groundPath, cacheTime);
        } else {
            // Subsequent chunks: write into the appender file at this chunk's offset
            groundPath = (String) redisUtil.getObject(UpLoadConstant.fastDfsPath + md5);
            appendFileStorageClient.modifyFile(UpLoadConstant.DEFAULT_GROUP, groundPath, file.getInputStream(),
                    file.getSize(), historyUpload);
            Integer chunkNum = (Integer) redisUtil.getObject(UpLoadConstant.uploadChunkNum + md5);
            chunkNum = chunkNum + 1;
            redisUtil.setObject(UpLoadConstant.uploadChunkNum + md5, chunkNum, cacheTime);
        }
        Integer num = (Integer) redisUtil.getObject(UpLoadConstant.uploadChunkNum + md5);
        if (totalChunk == num) {
            // All chunks have arrived: report the final path and clean up the Redis bookkeeping
            response.setStatus(200);
            map.put("result", "upload succeeded");
            map.put("path", groundPath);
            redisUtil.del(UpLoadConstant.uploadChunkNum + md5);
            redisUtil.del(UpLoadConstant.fastDfsPath + md5);
        }
    } catch (FdfsIOException | SocketTimeoutException e) {
        // Transient network error: tell the front end to resend this chunk
        response.setStatus(407);
        map.put("result", "please resend");
        return map;
    } catch (Exception e) {
        e.printStackTrace();
        redisUtil.del(UpLoadConstant.uploadChunkNum + md5);
        redisUtil.del(UpLoadConstant.fastDfsPath + md5);
        response.setStatus(500);
        map.put("result", "upload error");
        return map;
    }
    System.out.println("result=" + map.get("result"));
    System.out.println("path=" + map.get("path"));
    return map;
}
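
The controller references UpLoadConstant and cacheTime, which the post does not show. A plausible minimal sketch follows; the group name, key prefixes, and expiry are assumptions, not the original values.

public class UpLoadConstant {
    // FastDFS storage group that holds the appender files (assumed name)
    public static final String DEFAULT_GROUP = "group1";
    // Redis key prefix: number of chunks received so far, keyed by file MD5 (assumed prefix)
    public static final String uploadChunkNum = "uploadChunkNum:";
    // Redis key prefix: FastDFS path of the appender file, keyed by file MD5 (assumed prefix)
    public static final String fastDfsPath = "fastDfsPath:";
}

// In the controller, e.g.:
// private static final Integer cacheTime = 60 * 60 * 24; // keys live one day (assumed)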

Entity class: MultipartFileParam

package com.dgut.fastdfs.entity;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;
import org.springframework.web.multipart.MultipartFile;

import java.io.Serializable;

/**
 * @author :陈文浩
 * @date :Created in 2019/12/15 12:27
 * @description: Parameter object for one chunk of a chunked upload
 */
@Data
@AllArgsConstructor
@NoArgsConstructor
@ToString
public class MultipartFileParam implements Serializable {

    private String taskId;        // upload task ID
    private long chunkNumber;     // index of the current chunk (1-based)
    private long chunkSize;       // size of each chunk
    private long totalChunks;     // total number of chunks
    private String identifier;    // unique file identifier (MD5)
    private MultipartFile file;   // the chunk's file payload

}
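
For reference, a hedged sketch of how Spring binds simple-uploader's multipart form onto MultipartFileParam. This is a MockMvc test, not part of the original project; the parameter values and MD5 are placeholders, and actually running it needs the FastDFS and Redis environment the post assumes.

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.mock.web.MockMultipartFile;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders;
import org.springframework.test.web.servlet.result.MockMvcResultMatchers;

@SpringBootTest
@AutoConfigureMockMvc
class ChunkUploadBindingTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void firstChunkBindsToMultipartFileParam() throws Exception {
        byte[] chunkBytes = new byte[1024]; // stand-in for real chunk data
        // The part/parameter names must match the entity's properties
        MockMultipartFile chunk = new MockMultipartFile(
                "file", "movie.mp4", "application/octet-stream", chunkBytes);
        mockMvc.perform(MockMvcRequestBuilders.multipart("/fastDfsChunkUpload")
                        .file(chunk)
                        .param("chunkNumber", "1")
                        .param("chunkSize", "5242880")
                        .param("totalChunks", "130")
                        .param("identifier", "d41d8cd98f00b204e9800998ecf8427e")) // placeholder MD5
                .andExpect(MockMvcResultMatchers.status().isOk());
    }
}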

Utility class: RedisUtil

package com.dgut.fastdfs.utils;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.concurrent.TimeUnit;

@Component
public class RedisUtil {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    // Set an object value with an expiry (in seconds)
    public boolean setObject(final String key, Object value, Integer expireTime) {
        try {
            redisTemplate.opsForValue().set(key, value);
            redisTemplate.expire(key, expireTime, TimeUnit.SECONDS);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    // Get an object value
    public Object getObject(final String key) {
        return key == null ? null : redisTemplate.opsForValue().get(key);
    }

    // Push a value onto a list, with an expiry (in seconds)
    public boolean setList(final String key, Object value, Integer expireTime) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            redisTemplate.expire(key, expireTime, TimeUnit.SECONDS);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    // Read an entire list
    public List<Object> getList(final String key) {
        try {
            return redisTemplate.opsForList().range(key, 0, -1);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    // Check whether a key exists
    public boolean hasKey(final String key) {
        try {
            return redisTemplate.hasKey(key);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return false;
    }

    // Delete a key
    public void del(final String key) {
        if (hasKey(key)) {
            redisTemplate.delete(key);
        }
    }

}
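
RedisUtil stores Strings and Integers through RedisTemplate<String, Object>. For those values to round-trip cleanly, the template needs explicit serializers. The original configuration is not shown, so the following is one common setup, not necessarily the author's; the package name is also an assumption.

package com.dgut.fastdfs.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisConfig {

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        // Keys are plain strings; values are serialized as JSON with type info
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}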

Main changes needed on the front end

  • the API endpoint it calls
  • the number of chunks uploaded concurrently (screenshot omitted)

The project code has been uploaded to the Baidu netdisk; you can download it yourself.

The end

The author spent two hours putting these resources together. After reading, don't forget to leave a like. Thank you!


Don’t be melancholy in the next life

Origin: blog.csdn.net/qq_42910468/article/details/108607427