Uploading and downloading large files in a Java web project

The core principle:

 

The core of the project is chunked file upload. The front end and back end have to cooperate closely and agree on the data they exchange in order to upload a large file in chunks. In this project we focus on solving the following problems:

* How to split the file into chunks;

* How to merge the chunks back into the original file;

* After an interruption, which chunk to resume from.

How to split: powerful JavaScript libraries save us most of the work here, and mature chunked-upload solutions for large files already exist. My programmer instincts pushed me to reinvent the wheel anyway, but because of time constraints at work I had to let that go. In the end I chose Baidu's WebUploader for the front end.

How to merge: before merging, we first have to solve another problem, namely how to tell which file a chunk belongs to. At the beginning I had the front end generate a unique UUID to identify the file and attached it to every chunk request. Later, when implementing instant upload (second transfer), I dropped that approach and used MD5 to maintain the relationship between chunks and their file.
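The front end computes the file's MD5 (more on that below); on the server side the same value can be recomputed or verified with the JDK's MessageDigest. A minimal sketch, not the project's actual implementation:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Util {

    /** Streams the file and returns its MD5 as a lowercase hex string. */
    public static String md5Of(String path) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(Paths.get(path))) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                md.update(buffer, 0, read);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```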

As for the server merging the file and recording which chunks have arrived, the industry has already given us a good solution. Look at Thunder (Xunlei): every download produces two files, the file body and a temporary file, and the temporary file stores each chunk's status and its corresponding byte position.

This is where the front end and back end need to cooperate closely: the front end splits the file into chunks of a fixed size and sends the chunk index and chunk size with each request. When a request reaches the back end, the server only needs the chunk index and the chunk size from the request (the chunk size is fixed and identical for every chunk) to calculate the start position, read the chunk data, and write it into the file at that offset.
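A minimal sketch of the per-chunk data the two ends agree on, expressed as a plain Java object; the field names here are illustrative, not the project's actual request parameters:

```java
public class ChunkRequest {
    private String fileMd5;   // identifies the file this chunk belongs to
    private int chunkIndex;   // zero-based chunk number sent by the front end
    private int chunkSize;    // fixed chunk size agreed between front end and back end
    private int totalChunks;  // total number of chunks for this file
    private byte[] data;      // raw bytes of this chunk

    /** Start position of this chunk inside the target file. */
    public long startOffset() {
        return (long) chunkIndex * chunkSize;
    }

    // getters and setters omitted for brevity
}
```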

To make development easier, I divided the server-side business logic into initialization, chunk handling, and upload completion.

The server-side business-logic modules are as follows:
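As a rough sketch, the three modules could be expressed as one service interface; the method names are mine, not the project's actual code:

```java
public interface ChunkUploadService {

    /** Initialization: register the upload, create the target file, return an upload id. */
    String init(String fileMd5, String fileName, long fileSize);

    /** Chunk handling: persist one chunk at its offset and record it as received. */
    void saveChunk(String fileMd5, int chunkIndex, byte[] data);

    /** Upload completion: verify that all chunks arrived and finalize the file. */
    void complete(String fileMd5);
}
```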

 

Functional Analysis:

Folder generation module

 

After a folder is uploaded, the server scans it.
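A minimal sketch of one way to do that scan, assuming the uploaded folder was stored under an uploadRoot directory (the name is illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FolderScanner {

    /** Walks the uploaded folder and prints every file and sub-folder it contains. */
    public static void scan(String uploadRoot) throws IOException {
        try (Stream<Path> paths = Files.walk(Paths.get(uploadRoot))) {
            paths.forEach(p -> System.out.println(
                    (Files.isDirectory(p) ? "[dir]  " : "[file] ") + p));
        }
    }
}
```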

 

Chunk upload and chunk handling should be the simplest part of the logic. up6 has already chunked the file, and each chunk carries identification data: the chunk index, chunk size, offset, the file's MD5, the chunk's MD5 (when enabled), and so on. With this information the server can process each chunk very conveniently, for example by storing the chunk data in a distributed storage system.
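A minimal servlet-style sketch of receiving one chunk together with its identification data and storing it as an individual part file; the parameter names and the storage layout are my assumptions, not up6's actual protocol:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BlockUploadServlet extends HttpServlet {

    private static final String STORE_ROOT = "/data/upload/blocks"; // illustrative path

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Chunk identification data, assumed here to arrive in the query string.
        String fileMd5 = req.getParameter("fileMd5");
        String blockIndex = req.getParameter("blockIndex");

        // Store the raw chunk bytes as an individual part file keyed by file MD5 + index.
        File dir = new File(STORE_ROOT, fileMd5);
        dir.mkdirs();
        File part = new File(dir, blockIndex + ".part");
        try (InputStream in = req.getInputStream();
             OutputStream out = new FileOutputStream(part)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        resp.getWriter().write("ok"); // tell the control this chunk succeeded
    }
}
```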

 

Chunked upload is the foundation of the whole project; features such as resumable upload and pause all rely on chunking.

The chunking itself is relatively straightforward. The front end uses WebUploader, which already encapsulates chunking and the other basic functions, so it is easy to use.

With the file API WebUploader gives us, the front-end work is extremely simple.

Front-end HTML template:

 

Now for merging. The large file has been split into chunks, but the chunks by themselves are not the original file, so we need to merge them back into it. All we have to do is write each chunk into the file at its original position. From the principle described earlier we know the chunk size and the chunk index, so we can work out each chunk's start position within the file. The sensible choice here is RandomAccessFile, which can seek back and forth inside a file. However, most of RandomAccessFile's functionality has, since JDK 1.4, been superseded by NIO's memory-mapped files (MappedByteBuffer). In this project I wrote the merge both ways, with RandomAccessFile and with MappedByteBuffer; the corresponding methods are uploadFileRandomAccessFile and uploadFileByMappedByteBuffer.
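A rough sketch of the two approaches under the assumptions above (fixed chunk size, each chunk written at chunkIndex * chunkSize); the project's actual method bodies are not reproduced here, so the signatures are illustrative:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ChunkMerger {

    /** Classic approach: seek to the chunk's start position and write it. */
    public static void writeWithRandomAccessFile(String targetPath, long chunkSize,
                                                 int chunkIndex, byte[] chunk) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(targetPath, "rw")) {
            raf.seek(chunkIndex * chunkSize);
            raf.write(chunk);
        }
    }

    /** NIO approach: map the chunk's region of the file into memory and copy into it. */
    public static void writeWithMappedByteBuffer(String targetPath, long chunkSize,
                                                 int chunkIndex, byte[] chunk) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(targetPath, "rw");
             FileChannel channel = raf.getChannel()) {
            MappedByteBuffer buffer = channel.map(
                    FileChannel.MapMode.READ_WRITE, chunkIndex * chunkSize, chunk.length);
            buffer.put(chunk);
        }
    }
}
```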

Instant upload (second transfer)

 

Server-side logic

I am sure everyone has seen the second-transfer feature: you upload a file to a network drive, it turns out to have been uploaded before, and the upload finishes in seconds. Anyone who has looked into the principle knows it is an MD5 check on the file: the system records the MD5 of every uploaded file; before uploading, the client computes the MD5 of the file's contents (or of part of them) and matches it against the data already in the system.

How breakpoint-http implements second transfer: after the client selects a file and clicks upload, it computes the file's MD5 value, then calls the system interface (/index/checkFileMd5) with that MD5 to query whether it already exists (in my project the data is kept in redis, with the file's MD5 value as the key and the file's storage path as the value). The interface returns a status, and the client decides the next step accordingly. I believe a look at the code makes this clear.
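A minimal sketch of that check, assuming the Jedis client is used to talk to redis; the class and method names are illustrative, though the endpoint name /index/checkFileMd5 and the key/value scheme (MD5 as key, file path as value) come from the description above:

```java
import redis.clients.jedis.Jedis;

public class Md5CheckService {

    private final Jedis jedis = new Jedis("localhost", 6379);

    /**
     * Returns the stored file path if this MD5 has already been uploaded
     * (instant upload), or null if the file still needs to be uploaded.
     */
    public String checkFileMd5(String fileMd5) {
        return jedis.get(fileMd5);
    }

    /** Called after a successful upload to record the MD5 -> path mapping. */
    public void recordFile(String fileMd5, String filePath) {
        jedis.set(fileMd5, filePath);
    }
}
```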

By the way, the MD5 value on the front end is computed with WebUploader's built-in feature; it really is a handy tool.

When the control finishes calculating the file's MD5, it fires the md5_complete event and passes the md5 value along; developers only need to handle this event.

Resumable upload

up6 handles resuming automatically, so no separate development work is needed for it.

F_post.jsp receives these parameters and processes them; developers only need to focus on the business logic and do not have to worry about anything else.

Resumable upload deals with the case where a file upload is interrupted partway, whether by human action (pausing) or by force majeure (the network dropping or being poor), so the upload fails halfway through. When conditions recover and the file is uploaded again, it does not have to start over from the beginning.

As mentioned earlier, resumable upload is built on top of chunked upload: the large file is split into many small chunks, and every chunk the server receives successfully is persisted. When the client starts an upload it first calls a quick verification interface and, where the conditions are met, skips the chunks that are already on the server.

The principle: before each upload, the client obtains the file's MD5 value and calls the interface (/index/checkFileMd5, yes, the same one used for the second-transfer check). If the file's status is not complete, the interface returns the numbers of all chunks that have not yet been uploaded; the front end then works out which chunks still need uploading and uploads only those.
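A rough sketch of the server side of that check, keeping the indices of received chunks in a redis set keyed by the file's MD5 and returning the missing ones; the key naming and the use of a set are my assumptions, not the project's exact implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

import redis.clients.jedis.Jedis;

public class ResumeService {

    private final Jedis jedis = new Jedis("localhost", 6379);

    /** Record that one chunk has been stored successfully. */
    public void markChunkReceived(String fileMd5, int chunkIndex) {
        jedis.sadd("chunks:" + fileMd5, String.valueOf(chunkIndex));
    }

    /** Return the chunk indices that still need to be uploaded. */
    public List<Integer> missingChunks(String fileMd5, int totalChunks) {
        Set<String> received = jedis.smembers("chunks:" + fileMd5);
        List<Integer> missing = new ArrayList<>();
        for (int i = 0; i < totalChunks; i++) {
            if (!received.contains(String.valueOf(i))) {
                missing.add(i);
            }
        }
        return missing;
    }
}
```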

Once the server receives the file chunks, it can write them straight into the file.

This is what the chunked file upload looks like:

This is the result after the folder upload:

This is the folder structure stored on the server side after the upload:

Reference article: http://blog.ncmem.com/wordpress/2019/08/12/java-http%E5%A4%A7%E6%96%87%E4%BB%B6%E6%96%AD%E7%82%B9%E7%BB%AD%E4%BC%A0%E4%B8%8A%E4%BC%A0/

Welcome to join the discussion group: 374992201

