Spring Boot integrates MinIO to implement chunked video upload and resumable (breakpoint) upload

1. Introduction

Previously, I built a mock short-video application following a MOOC course, and many parts of it were rough. For example, the video upload was done by the front end uploading directly to the cloud service, without considering the quality of the user's network. If a video upload is almost finished but the network drops before it completes, the user has to upload the whole file again, which is a terrible user experience.

To fix this, we can make video uploads resumable: if a network error or program crash interrupts an upload, the remaining part continues from the recorded breakpoint instead of starting over. Resumable upload relies on the file's MD5 hash and on chunked (multipart) upload. The chunked upload flow in this demo is shown in the figure below.

(Figure: chunked upload flow)

Using the file's MD5 as a unique identifier, the backend checks the database for an existing SysUploadTask. If one exists, its TaskInfo is returned directly; otherwise, an UploadId is obtained through the amazonS3 client and a new SysUploadTask is created and returned. After the front end splits the file into chunks, it requests a pre-signed URL for each chunk from the server and then uploads each chunk directly to the MinIO server, which keeps upload traffic off the application server, where it would otherwise consume bandwidth and hurt stability. Finally, the front end sends a merge request to the backend.
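
The breakpoint lookup above is keyed on the file's MD5. As a minimal stdlib-only sketch (in the real project the front end would compute this hash before calling the backend; the class name here is hypothetical), the identifier could be derived like this:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Identifier {

    // Compute the hex MD5 digest of a byte array. For large video files the
    // real implementation would stream the file through the digest in chunks
    // rather than hold the whole file in memory.
    public static String md5Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }
}
```

Two uploads of the same bytes then map to the same task record, which is what makes the resume lookup work.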

2. Database structure

(Figure: structure of the sys_upload_task table)
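
Since the table figure is not reproduced here, the shape of the entity can be inferred from the fields the service layer sets below. This is a sketch only; the actual column names, types, and extra columns (id, timestamps) in the original table may differ:

```java
// Sketch of the sys_upload_task entity, inferred from the setters used in
// initTask(); fluent setters return "this" to match the chained calls below.
public class SysUploadTask {
    private String uploadId;       // multipart upload id returned by MinIO/S3
    private String fileIdentifier; // MD5 of the whole file
    private String fileName;       // original file name
    private String bucketName;     // target bucket
    private String objectKey;      // object key inside the bucket
    private Long totalSize;        // total file size in bytes
    private Long chunkSize;        // size of each chunk in bytes
    private Integer chunkNum;      // total number of chunks

    public SysUploadTask setUploadId(String v) { this.uploadId = v; return this; }
    public SysUploadTask setFileIdentifier(String v) { this.fileIdentifier = v; return this; }
    public SysUploadTask setFileName(String v) { this.fileName = v; return this; }
    public SysUploadTask setBucketName(String v) { this.bucketName = v; return this; }
    public SysUploadTask setObjectKey(String v) { this.objectKey = v; return this; }
    public SysUploadTask setTotalSize(Long v) { this.totalSize = v; return this; }
    public SysUploadTask setChunkSize(Long v) { this.chunkSize = v; return this; }
    public SysUploadTask setChunkNum(Integer v) { this.chunkNum = v; return this; }

    public String getUploadId() { return uploadId; }
    public String getBucketName() { return bucketName; }
    public String getObjectKey() { return objectKey; }
    public Integer getChunkNum() { return chunkNum; }
}
```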

3. Backend implementation

3.1. Check by MD5 whether the file already exists

Controller layer

    /**
     * Check whether the file has been uploaded before; if so, return a TaskInfoDTO
     * @param identifier the file's MD5
     * @return task info for the file, or null if it was never uploaded
     */
    @GetMapping("/{identifier}")
    public GraceJSONResult taskInfo(@PathVariable("identifier") String identifier) {
        return GraceJSONResult.ok(sysUploadTaskService.getTaskInfo(identifier));
    }

Service layer

    /**
     * Check whether the file has been uploaded before; if so, return a TaskInfoDTO
     * @param identifier the file's MD5
     * @return task info, or null if no task exists for this file
     */
    public TaskInfoDTO getTaskInfo(String identifier) {
        SysUploadTask task = getByIdentifier(identifier);
        if (task == null) {
            return null;
        }
        TaskInfoDTO result = new TaskInfoDTO()
                .setFinished(true)
                .setTaskRecord(TaskRecordDTO.convertFromEntity(task))
                .setPath(getPath(task.getBucketName(), task.getObjectKey()));

        boolean doesObjectExist = amazonS3.doesObjectExist(task.getBucketName(), task.getObjectKey());
        if (!doesObjectExist) {
            // Upload not finished yet: return the chunks that were already uploaded
            ListPartsRequest listPartsRequest = new ListPartsRequest(task.getBucketName(), task.getObjectKey(), task.getUploadId());
            PartListing partListing = amazonS3.listParts(listPartsRequest);
            result.setFinished(false).getTaskRecord().setExitPartList(partListing.getParts());
        }
        return result;
    }

3.2. Initialize an upload task

Controller layer

    /**
     * Create an upload task
     * @return the initialized task info
     */
    @PostMapping
    public GraceJSONResult initTask(@Valid @RequestBody InitTaskParam param) {
        return GraceJSONResult.ok(sysUploadTaskService.initTask(param));
    }

Service layer

    /**
     * Initialize an upload task
     */
    public TaskInfoDTO initTask(InitTaskParam param) {
        Date currentDate = new Date();
        String bucketName = minioProperties.getBucketName();
        String fileName = param.getFileName();
        String suffix = fileName.substring(fileName.lastIndexOf(".") + 1);
        // Use lowercase "yyyy" (calendar year); uppercase "YYYY" is the week-based year
        String key = StrUtil.format("{}/{}.{}", DateUtil.format(currentDate, "yyyy-MM-dd"), IdUtil.randomUUID(), suffix);
        String contentType = MediaTypeFactory.getMediaType(key).orElse(MediaType.APPLICATION_OCTET_STREAM).toString();
        ObjectMetadata objectMetadata = new ObjectMetadata();
        objectMetadata.setContentType(contentType);
        InitiateMultipartUploadResult initiateMultipartUploadResult = amazonS3
                .initiateMultipartUpload(new InitiateMultipartUploadRequest(bucketName, key).withObjectMetadata(objectMetadata));
        String uploadId = initiateMultipartUploadResult.getUploadId();

        SysUploadTask task = new SysUploadTask();
        // Round up so a partial final chunk is still counted
        int chunkNum = (int) Math.ceil(param.getTotalSize() * 1.0 / param.getChunkSize());
        task.setBucketName(bucketName)
                .setChunkNum(chunkNum)
                .setChunkSize(param.getChunkSize())
                .setTotalSize(param.getTotalSize())
                .setFileIdentifier(param.getIdentifier())
                .setFileName(fileName)
                .setObjectKey(key)
                .setUploadId(uploadId);
        sysUploadTaskMapper.insert(task);
        return new TaskInfoDTO().setFinished(false).setTaskRecord(TaskRecordDTO.convertFromEntity(task)).setPath(getPath(bucketName, key));
    }
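
The chunk count uses ceiling division so that a smaller final chunk is still counted as a full chunk. Extracted as a standalone sketch (the class name is illustrative):

```java
public class ChunkMath {
    // Number of chunks needed to cover totalSize bytes in chunkSize pieces,
    // rounding up so a partial final chunk still counts as one chunk.
    public static int chunkNum(long totalSize, long chunkSize) {
        return (int) Math.ceil(totalSize * 1.0 / chunkSize);
    }
}
```

For example, a 25 MB file split into 10 MB chunks yields three chunks: two full ones and a 5 MB tail.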

3.3. Get a pre-signed upload URL for each chunk

Controller layer

    /**
     * Get a pre-signed upload URL for one chunk
     * @param identifier the file's MD5
     * @param partNumber the chunk number
     * @return the pre-signed URL for this chunk
     */
    @GetMapping("/{identifier}/{partNumber}")
    public GraceJSONResult preSignUploadUrl(@PathVariable("identifier") String identifier, @PathVariable("partNumber") Integer partNumber) {
        SysUploadTask task = sysUploadTaskService.getByIdentifier(identifier);
        if (task == null) {
            return GraceJSONResult.error("Chunk upload task does not exist");
        }
        Map<String, String> params = new HashMap<>();
        params.put("partNumber", partNumber.toString());
        params.put("uploadId", task.getUploadId());
        return GraceJSONResult.ok(sysUploadTaskService.genPreSignUploadUrl(task.getBucketName(), task.getObjectKey(), params));
    }

Service layer

    /**
     * Generate a pre-signed upload URL
     * @param bucket bucket name
     * @param objectKey object key
     * @param params extra request parameters (here: uploadId and partNumber)
     * @return the pre-signed URL
     */
    public String genPreSignUploadUrl(String bucket, String objectKey, Map<String, String> params) {
        Date currentDate = new Date();
        Date expireDate = DateUtil.offsetMillisecond(currentDate, PRE_SIGN_URL_EXPIRE.intValue());
        GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest(bucket, objectKey)
                .withExpiration(expireDate).withMethod(HttpMethod.PUT);
        if (params != null) {
            params.forEach(request::addRequestParameter);
        }
        URL preSignedUrl = amazonS3.generatePresignedUrl(request);
        return preSignedUrl.toString();
    }

3.4. Merge the chunks

Controller layer

    /**
     * Merge the uploaded chunks
     * @param identifier the file's MD5
     * @return OK on success
     */
    @PostMapping("/merge/{identifier}")
    public GraceJSONResult merge(@PathVariable("identifier") String identifier) {
        sysUploadTaskService.merge(identifier);
        return GraceJSONResult.ok();
    }

Service layer

    /**
     * Merge the uploaded chunks
     * @param identifier the file's MD5
     */
    public void merge(String identifier) {
        SysUploadTask task = getByIdentifier(identifier);
        if (task == null) {
            throw new RuntimeException("Chunk upload task does not exist");
        }

        ListPartsRequest listPartsRequest = new ListPartsRequest(task.getBucketName(), task.getObjectKey(), task.getUploadId());
        PartListing partListing = amazonS3.listParts(listPartsRequest);
        List<PartSummary> parts = partListing.getParts();
        if (!task.getChunkNum().equals(parts.size())) {
            // The number of uploaded chunks does not match the record; refuse to merge
            throw new RuntimeException("Chunks are missing, please upload again");
        }
        CompleteMultipartUploadRequest completeMultipartUploadRequest = new CompleteMultipartUploadRequest()
                .withUploadId(task.getUploadId())
                .withKey(task.getObjectKey())
                .withBucketName(task.getBucketName())
                .withPartETags(parts.stream().map(partSummary -> new PartETag(partSummary.getPartNumber(), partSummary.getETag())).collect(Collectors.toList()));
        amazonS3.completeMultipartUpload(completeMultipartUploadRequest);
    }

4. Cleaning up leftover chunks

How should the already-uploaded chunks be cleaned up when a video upload is abandoned halfway?

One option is to add a status field to the sys_upload_task table indicating whether the chunks have been merged. It defaults to false and is set to true once the merge request completes; a scheduled job then periodically deletes records whose status is still false after some expiry, along with their multipart uploads. In addition, MinIO itself periodically cleans up stale incomplete multipart uploads.
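
A minimal stdlib-only sketch of that idea follows. The `TaskRecord` type and the class name are stand-ins for the real entity, and in the actual service each stale task would also trigger `amazonS3.abortMultipartUpload(...)` plus a database delete:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

public class StaleTaskCleaner {

    // Minimal stand-in for a sys_upload_task row with the proposed status flag.
    public record TaskRecord(String uploadId, boolean merged, Instant createdAt) {}

    // Pure selection logic: unmerged tasks older than maxAge are stale.
    public static List<TaskRecord> findStale(List<TaskRecord> tasks, Instant now, Duration maxAge) {
        return tasks.stream()
                .filter(t -> !t.merged())
                .filter(t -> t.createdAt().plus(maxAge).isBefore(now))
                .collect(Collectors.toList());
    }

    // Wire the selection into a periodic job; a Spring project would more
    // likely use @Scheduled instead of a raw executor.
    public static ScheduledExecutorService schedule(Runnable cleanupOnce) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(cleanupOnce, 1, 1, TimeUnit.HOURS);
        return scheduler;
    }
}
```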

5. Demo address

Spring Boot integrates MinIO to implement chunked video upload and resumable upload

Origin blog.csdn.net/weixin_44153131/article/details/129249169