Xuecheng Online notes + pitfalls (5) - [Media Assets Module] Upload video, resumable upload


Table of contents

5 upload video 

5.1 Preview of uploading video process on the media assets management page

5.2 Breakpoint resume technology

5.2.1 What is resumable upload

5.2.2 Test chunking and merging, RandomAccessFile random flow

5.2.3 Video upload process

5.2.4 Test minio merged files

5.3 Interface definition, check file/block, upload block, merge block

5.4 Upload block Service

5.4.1 Checking files and chunks

5.4.2 Upload chunks

5.4.3 Improve the interface layer

Error reporting, Tomcat default upload file size limit is 1M, yml configuration file upload limit 

5.5 Merge block development

5.5.1 service development

5.5.2 Improvement of interface layer

5.5.3 Merge block test


5 upload video 

5.1 Preview of uploading video process on the media assets management page

1. Staff of the teaching institution open the media asset management list to query the media files uploaded by their own organization.

Click "Media Management"

The media asset management list page opens and shows the organization's uploaded media files.

2. On the "Media Assets Management" page, the institution user clicks the "Upload Video" button.

Clicking "Upload Video" opens the upload page.

3. Select the file to upload; the upload starts automatically.

4. The uploaded video is processed automatically and can be previewed once processing completes.

5.2 Breakpoint resume technology

5.2.1 What is resumable upload

If the network drops while a large file is being uploaded, the client has to upload the whole file again, which makes for a very poor user experience.

With resumable transfer (breakpoint continuation), the upload or download task (a file or archive) is deliberately split into several parts, and each part is transferred by its own thread. If a network failure occurs, the transfer continues from where it stopped instead of starting over from the beginning, so only the unfinished parts need to be transferred. Resuming from the breakpoint saves a lot of time.

The process is as follows:

1. The front end splits the file into chunks before uploading.

2. The chunks are uploaded one by one; if the upload is interrupted, it restarts, but chunks that were already uploaded are skipped.

3. Once every chunk is uploaded, the server merges the chunks into the final file.

5.2.2 Test chunking and merging, RandomAccessFile random flow

The process of file partitioning is as follows:

  • 1. Get the length of the source file
  • 2. Calculate the number of blocks according to the size of the set block file
  • 3. Read data from the source file and write data to each block file in turn.

The test code is as follows:

Random stream RandomAccessFile:

It is the most feature-rich file content access class in the Java input/output system. It provides many methods for accessing file content: it can both read from and write to a file. Unlike ordinary input/output streams, RandomAccessFile supports "random access": the program can jump directly to any position in the file to read or write data.
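As a minimal demonstration of that random access (written against a temporary file, so nothing here is project-specific): seek() jumps straight to a byte offset with no sequential reading.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class RandomAccessDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("raf-demo", ".bin");
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.write(new byte[]{10, 20, 30, 40, 50}); // write five bytes
            raf.seek(3);                               // jump directly to offset 3
            System.out.println(raf.read());            // prints 40
        } finally {
            f.delete();
        }
    }
}
```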


package com.xuecheng.media;

import org.junit.jupiter.api.Test;

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

/**
 * @description Big-file handling test
 */
public class BigFileTest {

    //Chunking test: split the video into 5MB chunks
    @Test
    public void testChunk() throws IOException {
        //source file
        File sourceFile = new File("D:\\develop\\upload\\1.项目背景.mp4");
        //storage path for the chunk files; the path must actually exist, otherwise a "path not found" error is thrown
        String chunkFilePath = "D:\\develop\\upload\\chunk\\";
        //chunk size, set to 5MB here
        int chunkSize = 1024 * 1024 * 5;
        //number of chunks; Math.ceil rounds up
        int chunkNum = (int) Math.ceil(sourceFile.length() * 1.0 / chunkSize);
        //use a random-access stream to read from the source file and write into the chunk files
        RandomAccessFile raf_r = new RandomAccessFile(sourceFile, "r");
        //buffer
        byte[] bytes = new byte[1024];
        //iterate over all chunks
        for (int i = 0; i < chunkNum; i++) {
            //"D:\develop\upload\chunk\0", "D:\develop\upload\chunk\1"...
            File chunkFile = new File(chunkFilePath + i);
            //stream that writes this chunk file
            RandomAccessFile raf_rw = new RandomAccessFile(chunkFile, "rw");
            int len = -1;
            //fill the byte array on each read
            while ((len = raf_r.read(bytes)) != -1) {
                raf_rw.write(bytes, 0, len);
                //stop writing this chunk once it reaches 5MB; without this check the first chunk would be as large as the source file and the rest would be empty
                if (chunkFile.length() >= chunkSize) {
                    break;
                }
            }
            raf_rw.close();
        }
        raf_r.close();
    }
}

Run the test: 

File merge process:

1. Find the chunk files to merge and sort them by chunk number.

2. Create the merged file.

3. Read each chunk file in turn and write its data into the merged file.

Test code for file merging:

    //merge the chunks
    @Test
    public void testMerge() throws IOException {
        //chunk file directory
        File chunkFolder = new File("D:\\develop\\upload\\chunk");
        //source file
        File sourceFile = new File("D:\\develop\\upload\\1.项目背景.mp4");
        //merged file
        File mergeFile = new File("D:\\develop\\upload\\1.项目背景_2.mp4");

        //1. fetch all chunk files
        File[] files = chunkFolder.listFiles();
        //2. convert the array to a list so it can be sorted
        List<File> filesList = Arrays.asList(files);
        //3. sort the chunk files by chunk number (the file name)
        Collections.sort(filesList, new Comparator<File>() {
            @Override
            public int compare(File o1, File o2) {
                return Integer.parseInt(o1.getName()) - Integer.parseInt(o2.getName());
            }
        });
        //stream that writes the merged file
        RandomAccessFile raf_rw = new RandomAccessFile(mergeFile, "rw");
        //buffer
        byte[] bytes = new byte[1024];
        //4. iterate over the chunks and write each into the target file
        for (File file : filesList) {
            //stream that reads a chunk
            RandomAccessFile raf_r = new RandomAccessFile(file, "r");
            int len = -1;
            while ((len = raf_r.read(bytes)) != -1) {
                raf_rw.write(bytes, 0, len);
            }
            raf_r.close();
        }
        raf_rw.close();
        //after merging, verify the merged file against the source file by md5
        FileInputStream fileInputStream_merge = new FileInputStream(mergeFile);
        FileInputStream fileInputStream_source = new FileInputStream(sourceFile);
        String md5_merge = DigestUtils.md5Hex(fileInputStream_merge);
        String md5_source = DigestUtils.md5Hex(fileInputStream_source);
        if (md5_merge.equals(md5_source)) {
            System.out.println("文件合并成功");
        }
    }

5.2.3 Video upload process

The following figure shows the overall process of uploading videos:

1. The front end splits the file into chunks.

2. Before uploading the chunks, the front end asks the media asset service to check whether the original file and each chunk already exist. Whatever already exists is not uploaded again.

The existence check rests on the media asset's primary key being the file's md5 value: two files with the same md5 are considered the same file.

3. If a chunk does not exist, the front end starts uploading it.

4. The front end requests the media asset service to upload the chunk.

5. The media asset service stores the chunk in MinIO.

Note: neither the file nor its chunks should be stored in the bucket's root directory in MinIO. Here the first two characters of the file's md5 are used as directory levels in the path.

6. After all chunks are uploaded, the front end requests the media asset service to merge them.

7. When the media asset service confirms that all chunks are uploaded, it asks MinIO to merge the files.

8. After merging, the merged file is checked for completeness. If it is complete, the upload is recorded and the chunks are deleted; otherwise the file is deleted.
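The path rule from step 5 can be sketched as follows (the md5 value here is a made-up example; the real helper appears in the service code below):

```java
public class ChunkPathDemo {
    // The first two characters of the md5 become directory levels,
    // keeping objects out of the bucket's root directory.
    static String getChunkFileFolderPath(String fileMd5) {
        return fileMd5.substring(0, 1) + "/" + fileMd5.substring(1, 2) + "/" + fileMd5 + "/chunk/";
    }

    public static void main(String[] args) {
        System.out.println(getChunkFileFolderPath("abc123")); // a/b/abc123/chunk/
    }
}
```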

5.2.4 Test minio merged files

1. Upload the chunked file to minio

//upload the chunk files to minio
@Test
public void uploadChunk() {
    String chunkFolderPath = "D:\\develop\\upload\\chunk\\";
    File chunkFolder = new File(chunkFolderPath);
    //fetch all chunk files; listFiles() returns every file under the path as an array
    File[] files = chunkFolder.listFiles();
    //upload each chunk file to minio
    for (int i = 0; i < files.length; i++) {
        try {
            UploadObjectArgs uploadObjectArgs = UploadObjectArgs.builder()
                    .bucket("testbucket")
                    .object("chunk/" + i)
                    .filename(files[i].getAbsolutePath())
                    .build();
            minioClient.uploadObject(uploadObjectArgs);
            System.out.println("上传分块成功" + i);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

2. Merge files through minio

//merge the files; minio requires each chunk (except the last) to be at least 5MB
@Test
public void test_merge() throws Exception {
    List<ComposeSource> sources = Stream.iterate(0, i -> ++i)
            .limit(6)
            .map(i -> ComposeSource.builder()
                    .bucket("testbucket")
                    .object("chunk/".concat(Integer.toString(i)))
                    .build())
            .collect(Collectors.toList());

    ComposeObjectArgs composeObjectArgs = ComposeObjectArgs.builder()
            .bucket("testbucket").object("merge01.mp4")
            .sources(sources).build();
    minioClient.composeObject(composeObjectArgs);
}

//remove the chunk files
@Test
public void test_removeObjects() {
    //after the merge completes, clear the chunk files
    List<DeleteObject> deleteObjects = Stream.iterate(0, i -> ++i)
            .limit(6)
            .map(i -> new DeleteObject("chunk/".concat(Integer.toString(i))))
            .collect(Collectors.toList());

    RemoveObjectsArgs removeObjectsArgs = RemoveObjectsArgs.builder().bucket("testbucket").objects(deleteObjects).build();
    //removeObjects is lazy: the results must be iterated for the deletions to actually run
    Iterable<Result<DeleteError>> results = minioClient.removeObjects(removeObjectsArgs);
    results.forEach(r -> {
        try {
            r.get();
        } catch (Exception e) {
            e.printStackTrace();
        }
    });
}

Merging the files with MinIO reports an error: java.lang.IllegalArgumentException: source testbucket/chunk/0: size 1048576 must be greater than 5242880

MinIO's composeObject requires every source part (except the last) to be at least 5MB; the chunks in the earlier test were only 1MB. Change the chunk size to 5MB and test again.
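The arithmetic behind that limit, with an assumed example file size: every chunk except the tail must reach the 5MB minimum, and only the tail may be smaller.

```java
public class ChunkCountDemo {
    public static void main(String[] args) {
        long fileSize = 23_500_000L;          // assumed example file size in bytes
        int chunkSize = 5 * 1024 * 1024;      // 5MB, the minimum part size for composeObject
        //Math.ceil rounds up, so the tail remainder still gets its own chunk
        int chunkNum = (int) Math.ceil(fileSize * 1.0 / chunkSize);
        long lastChunk = fileSize - (long) (chunkNum - 1) * chunkSize;
        System.out.println(chunkNum);         // number of chunks
        System.out.println(lastChunk);        // only this tail chunk is under 5MB
    }
}
```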

5.3 Interface definition, check file/block, upload block, merge block

The contract with the front end: the operation returns {code:0} on success, otherwise {code:-1}.
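The project's real RestResponse wrapper presumably lives in a shared base module; the sketch below only mirrors the contract above, and its field and constructor details are assumptions.

```java
// Hypothetical minimal version of the response wrapper; field names are assumed
public class RestResponse<T> {
    private int code;        // 0 = success, -1 = failure
    private String msg;
    private T result;

    private RestResponse(int code, String msg, T result) {
        this.code = code;
        this.msg = msg;
        this.result = result;
    }

    public static <T> RestResponse<T> success(T result) {
        return new RestResponse<>(0, "success", result);
    }

    public static <T> RestResponse<T> validfail(T result, String msg) {
        return new RestResponse<>(-1, msg, result);
    }

    public int getCode() { return code; }

    public static void main(String[] args) {
        System.out.println(RestResponse.success(true).getCode());           // 0
        System.out.println(RestResponse.validfail(false, "err").getCode()); // -1
    }
}
```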

Define the interface as follows:

package com.xuecheng.media.api;

/**
 * @description Big-file upload API
 */
@Api(value = "大文件上传接口", tags = "大文件上传接口")
@RestController
public class BigFilesController {

    @ApiOperation(value = "文件上传前检查文件")
    @PostMapping("/upload/checkfile")
    public RestResponse<Boolean> checkfile(
            @RequestParam("fileMd5") String fileMd5
    ) throws Exception {
        return null;
    }

    //chunk is the chunk index
    @ApiOperation(value = "分块文件上传前的检测")
    @PostMapping("/upload/checkchunk")
    public RestResponse<Boolean> checkchunk(@RequestParam("fileMd5") String fileMd5,
                                            @RequestParam("chunk") int chunk) throws Exception {
        return null;
    }

    @ApiOperation(value = "上传分块文件")
    @PostMapping("/upload/uploadchunk")
    public RestResponse uploadchunk(@RequestParam("file") MultipartFile file,
                                    @RequestParam("fileMd5") String fileMd5,
                                    @RequestParam("chunk") int chunk) throws Exception {
        return null;
    }

    @ApiOperation(value = "合并文件")
    @PostMapping("/upload/mergechunks")
    public RestResponse mergechunks(@RequestParam("fileMd5") String fileMd5,
                                    @RequestParam("fileName") String fileName,
                                    @RequestParam("chunkTotal") int chunkTotal) throws Exception {
        return null;
    }
}

5.4 Upload block service

5.4.1 Checking files and chunks

After defining the interface, implement it. First implement the check-file and check-chunk methods.

@Override
public RestResponse<Boolean> checkFile(String fileMd5) {
    //query the file record in the database
    MediaFiles mediaFiles = mediaFilesMapper.selectById(fileMd5);
    if (mediaFiles != null) {
        //bucket
        String bucket = mediaFiles.getBucket();
        //storage path
        String filePath = mediaFiles.getFilePath();
        //file stream
        InputStream stream = null;
        try {
            stream = minioClient.getObject(
                    GetObjectArgs.builder()
                            .bucket(bucket)
                            .object(filePath)
                            .build());

            if (stream != null) {
                //the file already exists
                return RestResponse.success(true);
            }
        } catch (Exception e) {
            //object not found in minio; fall through and report that the file does not exist
        }
    }
    //the file does not exist
    return RestResponse.success(false);
}


@Override
public RestResponse<Boolean> checkChunk(String fileMd5, int chunkIndex) {

    //directory of the chunk files
    String chunkFileFolderPath = getChunkFileFolderPath(fileMd5);
    //path of this chunk file
    String chunkFilePath = chunkFileFolderPath + chunkIndex;

    //file stream
    InputStream fileInputStream = null;
    try {
        fileInputStream = minioClient.getObject(
                GetObjectArgs.builder()
                        .bucket(bucket_videoFiles)
                        .object(chunkFilePath)
                        .build());

        if (fileInputStream != null) {
            //the chunk already exists
            return RestResponse.success(true);
        }
    } catch (Exception e) {
        //chunk not found in minio; fall through and report that it does not exist
    }
    //the chunk does not exist
    return RestResponse.success(false);
}

//directory of the chunk files
private String getChunkFileFolderPath(String fileMd5) {
    return fileMd5.substring(0, 1) + "/" + fileMd5.substring(1, 2) + "/" + fileMd5 + "/" + "chunk" + "/";
}

5.4.2 Upload chunks

@Override
public RestResponse uploadChunk(String fileMd5, int chunk, String localChunkFilePath) {

    //directory of the chunk files: "abcde" -> "a/b/abcde/chunk/"
    String chunkFileFolderPath = getChunkFileFolderPath(fileMd5);
    //path of this chunk file
    String chunkFilePath = chunkFileFolderPath + chunk;
    //mimeType of the chunk; a null extension yields the generic byte-stream type
    String mimeType = getMimeType(null);
    //store the chunk in MinIO
    boolean b = addMediaFilesToMinIO(localChunkFilePath, mimeType, bucket_videoFiles, chunkFilePath);
    if (!b) {
        log.debug("上传分块文件失败:{}", chunkFilePath);
        return RestResponse.validfail(false, "上传分块失败");
    }
    log.debug("上传分块文件成功:{}", chunkFilePath);
    return RestResponse.success(true);
}

//resolve the mimeType from the file extension
private String getMimeType(String extension) {
    if (extension == null) {
        extension = "";
    }
    //look up the mimeType for the extension
    ContentInfo extensionMatch = ContentInfoUtil.findExtensionMatch(extension);
    //default to the generic mimeType: a byte stream
    String mimeType = MediaType.APPLICATION_OCTET_STREAM_VALUE;
    if (extensionMatch != null) {
        mimeType = extensionMatch.getMimeType();
    }
    return mimeType;
}

5.4.3 Improve the interface layer

@ApiOperation(value = "文件上传前检查文件")
@PostMapping("/upload/checkfile")
public RestResponse<Boolean> checkfile(
        @RequestParam("fileMd5") String fileMd5
) throws Exception {
    return mediaFileService.checkFile(fileMd5);
}


@ApiOperation(value = "分块文件上传前的检测")
@PostMapping("/upload/checkchunk")
public RestResponse<Boolean> checkchunk(@RequestParam("fileMd5") String fileMd5,
                                        @RequestParam("chunk") int chunk) throws Exception {
    return mediaFileService.checkChunk(fileMd5, chunk);
}

@ApiOperation(value = "上传分块文件")
@PostMapping("/upload/uploadchunk")
public RestResponse uploadchunk(@RequestParam("file") MultipartFile file,
                                @RequestParam("fileMd5") String fileMd5,
                                @RequestParam("chunk") int chunk) throws Exception {

    //create a temporary file
    File tempFile = File.createTempFile("minio", "temp");
    //copy the uploaded file into the temporary file
    file.transferTo(tempFile);
    //path of the temporary file
    String absolutePath = tempFile.getAbsolutePath();
    return mediaFileService.uploadChunk(fileMd5, chunk, absolutePath);
}

Start the front-end project and open the upload-video page for front-end/back-end joint debugging.

Error reporting: Tomcat default upload file size limit is 1M, yml configuration of the file upload limit

Uploading a chunk fails with an error, because the front end's chunk size is 5MB while Spring Boot's (embedded Tomcat's) default multipart upload limit is 1MB.

Solution:

Add the following to the media-api project's yml configuration in nacos:

spring:
  servlet:
    multipart:
      max-file-size: 50MB
      max-request-size: 50MB

max-file-size: size limit for a single file

max-request-size: size limit for a single request

5.5 Merge block development

5.5.1 service development

Business Process:

1. Get the chunk file paths
2. Merge the chunks
3. Compare the merged file's md5 with the source file's md5 to judge whether the upload succeeded
4. Insert the file information into the database
5. Clear the chunk files

Code: 

@Override
public RestResponse mergechunks(Long companyId, String fileMd5, int chunkTotal, UploadFileParamsDto uploadFileParamsDto) {
    //=====1. get the chunk file paths=====
    String chunkFileFolderPath = getChunkFileFolderPath(fileMd5);
    //assemble the chunk file paths into a List<ComposeSource>
    List<ComposeSource> sourceObjectList = Stream.iterate(0, i -> ++i)
            .limit(chunkTotal)
            .map(i -> ComposeSource.builder()
                    .bucket(bucket_videoFiles)
                    .object(chunkFileFolderPath.concat(Integer.toString(i)))
                    .build())
            .collect(Collectors.toList());
    //=====2. merge=====
    //file name
    String fileName = uploadFileParamsDto.getFilename();
    //file extension
    String extName = fileName.substring(fileName.lastIndexOf("."));
    //path of the merged file
    String mergeFilePath = getFilePathByMd5(fileMd5, extName);
    try {
        //merge the chunks
        ObjectWriteResponse response = minioClient.composeObject(
                ComposeObjectArgs.builder()
                        .bucket(bucket_videoFiles)
                        .object(mergeFilePath)
                        .sources(sourceObjectList)
                        .build());
        log.debug("合并文件成功:{}", mergeFilePath);
    } catch (Exception e) {
        log.debug("合并文件失败,fileMd5:{},异常:{}", fileMd5, e.getMessage(), e);
        return RestResponse.validfail(false, "合并文件失败。");
    }

    //=====3. compare the merged file's md5 with the source md5 to judge whether the upload succeeded=====
    //download the merged file
    File minioFile = downloadFileFromMinIO(bucket_videoFiles, mergeFilePath);
    if (minioFile == null) {
        log.debug("下载合并后文件失败,mergeFilePath:{}", mergeFilePath);
        return RestResponse.validfail(false, "下载合并后文件失败。");
    }

    try (InputStream newFileInputStream = new FileInputStream(minioFile)) {
        //md5 of the file on minio
        String md5Hex = DigestUtils.md5Hex(newFileInputStream);
        //if the md5 values differ, the file is incomplete
        if (!fileMd5.equals(md5Hex)) {
            return RestResponse.validfail(false, "文件合并校验失败,最终上传失败。");
        }
        //file size
        uploadFileParamsDto.setFileSize(minioFile.length());
    } catch (Exception e) {
        log.debug("校验文件失败,fileMd5:{},异常:{}", fileMd5, e.getMessage(), e);
        return RestResponse.validfail(false, "文件合并校验失败,最终上传失败。");
    } finally {
        if (minioFile != null) {
            minioFile.delete();
        }
    }

    //=====4. insert the file information into the database=====
    //currentProxy is this service's own bean, injected with @Autowired; calling through it instead of
    //"this" makes the call go through the Spring proxy so the transactional method really runs in a transaction
    currentProxy.addMediaFilesToDb(companyId, fileMd5, uploadFileParamsDto, bucket_videoFiles, mergeFilePath);
    //=====5. clear the chunk files=====
    clearChunkFiles(chunkFileFolderPath, chunkTotal);
    return RestResponse.success(true);
}

/**
 * Download a file from minio
 * @param bucket bucket
 * @param objectName object name
 * @return the downloaded file
 */
public File downloadFileFromMinIO(String bucket, String objectName) {
    //temporary file
    File minioFile = null;
    FileOutputStream outputStream = null;
    try {
        InputStream stream = minioClient.getObject(GetObjectArgs.builder()
                .bucket(bucket)
                .object(objectName)
                .build());
        //create the temporary file
        minioFile = File.createTempFile("minio", ".merge");
        outputStream = new FileOutputStream(minioFile);
        IOUtils.copy(stream, outputStream);
        return minioFile;
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (outputStream != null) {
            try {
                outputStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return null;
}

/**
 * Path of the merged file
 * @param fileMd5 file id, i.e. the md5 value
 * @param fileExt file extension
 * @return the object path
 */
private String getFilePathByMd5(String fileMd5, String fileExt) {
    return fileMd5.substring(0, 1) + "/" + fileMd5.substring(1, 2) + "/" + fileMd5 + "/" + fileMd5 + fileExt;
}

/**
 * Clear the chunk files
 * @param chunkFileFolderPath chunk file directory
 * @param chunkTotal total number of chunks
 */
private void clearChunkFiles(String chunkFileFolderPath, int chunkTotal) {

    try {
        List<DeleteObject> deleteObjects = Stream.iterate(0, i -> ++i)
                .limit(chunkTotal)
                .map(i -> new DeleteObject(chunkFileFolderPath.concat(Integer.toString(i))))
                .collect(Collectors.toList());

        RemoveObjectsArgs removeObjectsArgs = RemoveObjectsArgs.builder().bucket(bucket_videoFiles).objects(deleteObjects).build();
        //removeObjects is lazy: the results must be iterated for the deletions to actually run
        Iterable<Result<DeleteError>> results = minioClient.removeObjects(removeObjectsArgs);
        results.forEach(r -> {
            try {
                r.get();
            } catch (Exception e) {
                e.printStackTrace();
                log.error("清除分块文件失败", e);
            }
        });
    } catch (Exception e) {
        e.printStackTrace();
        log.error("清除分块文件失败,chunkFileFolderPath:{}", chunkFileFolderPath, e);
    }
}

Notice:

A non-transactional method must call a transactional method through a proxy object, otherwise the transaction annotation has no effect.

Therefore, when writing the file information to the database, inject the service's own bean and call the method through "currentProxy." rather than "this.", so that the call goes through the Spring proxy and the transaction is applied.
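The mechanics can be demonstrated without Spring: below, a JDK dynamic proxy stands in for the transaction interceptor, and a direct call bypasses it exactly as described above. All names here are illustrative, not the project's real classes.

```java
import java.lang.reflect.Proxy;

interface MediaService {
    void merge();
    void saveToDb();
}

public class ProxyDemo {
    static class MediaServiceImpl implements MediaService {
        MediaService currentProxy; // the injected proxy, like currentProxy in the service above

        public void merge() {
            currentProxy.saveToDb(); // via the proxy: the "transaction" advice runs
        }

        public void saveToDb() { }
    }

    public static void main(String[] args) {
        MediaServiceImpl target = new MediaServiceImpl();
        MediaService proxy = (MediaService) Proxy.newProxyInstance(
                MediaService.class.getClassLoader(),
                new Class<?>[]{MediaService.class},
                (p, method, a) -> {
                    System.out.println("begin tx: " + method.getName()); // stand-in for @Transactional
                    return method.invoke(target, a);
                });
        target.currentProxy = proxy;

        target.saveToDb(); // direct call: bypasses the proxy, prints nothing
        proxy.merge();     // intercepted; the nested saveToDb() also goes through the proxy
    }
}
```

Calling `proxy.merge()` prints the interception message for both methods, while the direct `target.saveToDb()` prints nothing, which is why `this.addMediaFilesToDb(...)` would silently skip the transaction.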

5.5.2 Improvement of interface layer

@ApiOperation(value = "合并文件")
@PostMapping("/upload/mergechunks")
public RestResponse mergechunks(@RequestParam("fileMd5") String fileMd5,
                                @RequestParam("fileName") String fileName,
                                @RequestParam("chunkTotal") int chunkTotal) throws Exception {
    //company id is hardcoded for now
    Long companyId = 1232141425L;

    UploadFileParamsDto uploadFileParamsDto = new UploadFileParamsDto();
    uploadFileParamsDto.setFileType("001002");
    uploadFileParamsDto.setTags("课程视频");
    uploadFileParamsDto.setRemark("");
    uploadFileParamsDto.setFilename(fileName);

    return mediaFileService.mergechunks(companyId, fileMd5, chunkTotal, uploadFileParamsDto);
}

5.5.3 Merge block test

The following is front-end/back-end joint debugging:

1. Upload a video and test the merge logic.

Step through the service method line by line.

2. Resumable upload test.

Upload part of the file, refresh the browser to interrupt the upload, then upload again: the browser log shows that chunks already uploaded are not uploaded again.


Origin blog.csdn.net/qq_40991313/article/details/129760408