Web Folder Upload Solution

Recently I needed to upload very large files. I looked into the chunked-upload features of the Qiniu and Tencent Cloud SDKs, and summarized how large-file upload can be implemented on the front end.

In some businesses, uploading large files is an important interaction scenario, for example importing a large Excel spreadsheet into a database, or uploading audio and video files. If the file is large, or network conditions are poor, the upload takes longer (more packets are transmitted, so the probability of packet loss and retransmission is higher), and the user cannot refresh the page and can only wait patiently for the request to complete.

Let's start with the basic file upload methods, work out an approach for uploading large files, and give example code along the way. Because PHP has convenient built-in methods for splitting and joining files, the server-side examples are written in PHP.

The sample code for this article is on GitHub; the main references are:

Talk about uploading large files

Large file cutting upload

Several ways to upload files

First, let's take a look at several ways to upload files.

Normal form upload

Using PHP to demonstrate a regular form upload is a good starting point. First build the file upload form, and set the form's submission type to enctype="multipart/form-data", indicating that the form needs to upload binary data.

Then write the index.php code that receives the uploaded file, using the move_uploaded_file method.

When uploading large files through a form, it is easy to run into server timeouts. With XHR, the front end can also upload files asynchronously; there are generally two approaches.

File encoding upload

The first approach is to encode the file and then decode it on the server side. I previously wrote a blog post on implementing image compression and upload on the front end; the main principle is to convert the image to a base64 string for transmission.

var img = new Image();
img.src = URL.createObjectURL(file);
img.onload = function () {
  ctx.drawImage(img, 0, 0);
  // Get the encoding of the picture, then send it as a long string
  var data = canvas.toDataURL("image/jpeg", 0.5);
};

What needs to be done on the server side is relatively simple: first decode the base64 string, then save the picture.

$imgData = $_REQUEST['imgData'];
$base64 = explode(',', $imgData)[1];
$img = base64_decode($base64);
$url = './test.jpg';
if (file_put_contents($url, $img)) {
    exit(json_encode(array(
        'url' => $url
    )));
}

The disadvantage of base64 encoding is that its volume is larger than the original file (Base64 encodes every three bytes as four bytes, so the encoded text is about one third larger than the original). For very large files, the time to upload and parse will increase significantly.

For more knowledge about base64, you can refer to the Base64 notes.

In addition to base64 encoding, you can also read the file content directly on the front end and upload it in binary format.

// Convert the string read by FileReader into binary data
function readBinary(text) {
  var data = new ArrayBuffer(text.length);
  var ui8a = new Uint8Array(data, 0);
  for (var i = 0; i < text.length; i++) {
    ui8a[i] = text.charCodeAt(i) & 0xff;
  }
  console.log(ui8a);
}

var reader = new FileReader();
reader.onload = function () {
  readBinary(this.result); // Read the result, or upload it directly
};
// Put the content of the file read from the input into the result field of the FileReader
reader.readAsBinaryString(file);

FormData asynchronous upload

The FormData object is mainly used to assemble a set of key / value pairs that send requests, and can send Ajax requests more flexibly. You can use FormData to simulate form submission.

let file = e.target.files[0]; // Get the File object from the input
let formData = new FormData();
formData.append('file', file);
axios.post(url, formData);

The server processing method is basically the same as the direct form request.

iframe no refresh page

On older browsers (such as IE), XHR does not support uploading FormData directly, so files can only be uploaded through a form, and a form submission itself causes a page navigation. This is controlled by the form's target attribute, whose possible values are:

_self, the default value, opens the response page in the same window

_blank, open in new window

_parent, open in the parent window

_top, open in the topmost window

framename, open in the iframe with the specified name

If you want users to experience an asynchronous upload, you can point the form at an iframe via framename: set the form's target attribute to an invisible iframe. The returned data will then be received by this iframe, so only the iframe is refreshed, and the response can be obtained by parsing the text inside the iframe.

function upload() {
  var now = +new Date();
  var id = 'frame' + now;
  $("body").append(`<iframe style="display:none;" name="${id}" id="${id}" />`);

  var $form = $("#myForm");
  $form.attr({
    "action": '/index.php',
    "method": "post",
    "enctype": "multipart/form-data",
    "encoding": "multipart/form-data",
    "target": id
  }).submit();

  $("#" + id).on("load", function () {
    var content = $(this).contents().find("body").text();
    try {
      var data = JSON.parse(content);
    } catch (e) {
      console.log(e);
    }
  });
}

Large file upload

Now let's look at the timeout problems that the upload methods above run into with large files.

Form upload and iframe no-refresh upload both send the file through the form tag. In this way, the entire request is handed over to the browser, and uploading large files can run into request timeouts.

Uploading through FormData actually wraps a set of request parameters in XHR to simulate a form request, so it cannot avoid the timeout problem with large files either.

With encoded upload, we can control the uploaded content more flexibly.

The main problem with large file uploads is that a large amount of data must be uploaded in a single request, so the process takes a long time, and after a failure the upload has to start over. Imagine splitting this one request into multiple requests: each request becomes shorter, and if one fails, only that request needs to be resent instead of starting from scratch. Wouldn't that solve the large file upload problem?

Based on the above problems, it seems that the upload of large files needs to meet the following requirements

Support split upload request (i.e. slice)

Support breakpoint resume

Support to display upload progress and pause upload

Next, let's implement these functions in turn; the core one is slicing.

File slicing

Reference: Large file cutting upload

In the encoded upload method, the front end only needs to obtain the file's binary content, split that content, and upload each slice to the server.

In JavaScript, the File object is a subclass of the Blob object. The Blob object has an important method, slice, with which we can split a binary file.
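A minimal sketch of splitting a file into fixed-size chunks with Blob.slice (the helper name and chunk size are illustrative, not from up6):

```javascript
// Split a Blob/File into fixed-size chunks using Blob.slice.
// slice(start, end) returns a new Blob for the byte range [start, end).
function createChunks(file, chunkSize) {
  const chunks = [];
  let start = 0;
  while (start < file.size) {
    chunks.push(file.slice(start, start + chunkSize));
    start += chunkSize;
  }
  return chunks;
}
```

Each chunk can then be appended to a FormData together with its index and uploaded as an independent request.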

The following illustrates splitting a file. With up6, developers do not need to care about the details of splitting; the control implements it, and developers only need to care about business logic.

When the control uploads, it adds relevant information to each file block, and developers can process that data themselves after receiving it on the server side.

After receiving these slices, the server can splice them back together into the original file.

With up6, developers do not need to implement splicing themselves; up6 provides sample code that already implements this logic.

To ensure uniqueness, the control adds information to each file block, such as the block index, block MD5, and file MD5.

Breakpoint resume

Up6 comes with a built-in resume function. Up6 saves the file information on the server and the upload progress on the client; the control automatically loads the progress information when uploading, so developers do not need to care about these details. In the processing logic for file blocks, blocks only need to be identified by their index.

At this point, if you refresh the page or close the browser during an upload and then upload the same file again, the previously uploaded slices will not be uploaded a second time.

The server-side logic for resuming is basically similar: inside getUploadSliceRecord, call the server's query interface to obtain the record of uploaded slices. This will not be expanded on here.
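The client side of this idea can be sketched as follows. Both callbacks here are hypothetical stand-ins (the article only names getUploadSliceRecord); the point is that slices whose indexes the server already recorded are skipped:

```javascript
// Resume an upload by skipping slices the server already has.
// `getUploadedIndexes` and `uploadSlice` are caller-supplied functions,
// standing in for the server's query interface and the per-slice upload.
async function resumeUpload(chunks, getUploadedIndexes, uploadSlice) {
  const uploaded = new Set(await getUploadedIndexes());
  for (let i = 0; i < chunks.length; i++) {
    if (uploaded.has(i)) continue; // already on the server from a previous session
    await uploadSlice(i, chunks[i]);
  }
}
```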

In addition, breakpoint resume needs to consider slice expiration. When the mkfile interface is called, the slices on disk can be cleared; but if the client never calls mkfile, keeping those slices on disk indefinitely is clearly unreliable. Normally a slice upload has a validity period, after which the slices are cleared. For this reason, breakpoint resume must also be synchronized with the slice-expiration logic.

Resume effect

 

Upload progress and pause

Through the progress event on xhr.upload, you can monitor the upload progress of each slice.

Pausing an upload is also relatively simple: call xhr.abort to cancel the slices that are still uploading, achieving the effect of a pause. Resuming is similar to breakpoint resume: first obtain the list of uploaded slices, then resend the slices that have not yet been uploaded.
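Since each slice reports progress independently, an overall percentage has to be aggregated from the per-slice events. A small sketch (illustrative, not from the article), where each entry holds the loaded/total bytes from one slice's latest progress event:

```javascript
// Aggregate per-slice progress into an overall percentage.
// Each entry is { loaded, total } from a slice's xhr.upload progress event.
function overallProgress(sliceProgress) {
  let loaded = 0, total = 0;
  for (const p of sliceProgress) {
    loaded += p.loaded;
    total += p.total;
  }
  return total === 0 ? 0 : Math.round((loaded / total) * 100);
}
```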

Due to space limitations, the upload progress and pause functions will not be implemented here.

To achieve the effect:

 

Summary

There are already mature large-file upload solutions in the community, such as the Qiniu SDK and Tencent Cloud SDK. We may never need to hand-roll a simple large-file upload library ourselves, but it is still worth understanding the principles.

This article first sorted out several ways of uploading files on the front end, then discussed the scenarios for large file uploads and the functions they need to implement:

Split the file into slices through the slice method of the Blob object

Organized the conditions and parameters required to restore the file on the server side, and demonstrated restoring slices into a file with PHP

Resume the breakpoint by saving the uploaded slice record

Some issues remain, such as avoiding memory overflow when merging files, the slice expiration strategy, and the upload progress and pause functions; these were not explored in depth or implemented one by one here. Keep learning.

The back-end code logic is mostly the same and currently supports MySQL, Oracle, and SQL. Before using it, you need to configure the database; you can refer to this article I wrote: http://blog.ncmem.com/wordpress/2019/08/12/java-http%E5%A4%A7%E6%96%87%E4%BB%B6%E6%96%AD%E7%82%B9%E7%BB%AD%E4%BC%A0%E4%B8%8A%E4%BC%A0/
Welcome to join the group to discuss: 374992201 


Origin www.cnblogs.com/songsu/p/12719540.html