Front-end large file upload

Requirements:

The project must support uploading large files. After discussion, the initial upload size limit was set to 20 GB, so the upload-related parts and configuration of the project need to be adjusted to raise the limit to 20 GB.

Support all desktop platforms: Windows, Mac, and Linux.

Support all browsers.

Support batch file upload.

Support folder upload, preserving the directory hierarchy on the server; folders with up to 100,000 entries must be supported.

Support resumable HTTP uploads of large files: after refreshing the browser, restarting the browser, or restarting the computer, the upload must still be able to continue. File sizes up to 20 GB must be supported.

Support automatically loading a specified local file.

Support batch file download without packaging on the server, because packaging a 20 GB file on the server takes a long time.

Support folder download without server-side packaging; the directory hierarchy must be preserved after download.

The file list panel must support path navigation and creating new folders.

 

Basics of large file uploads:

  Every web framework has a built-in object responsible for parsing the HTTP MultiPart protocol of file-upload requests from the browser, and for exposing the form content for the developer to use.

For example:

Spring uses objects such as CommonsMultipartFile to handle the uploaded binary file data.

.NET uses the HtmlInputFile / HttpPostedFile objects to handle the binary file data.

Pros: with the framework's built-in objects it is easy to handle the MultiPart binary data in browser requests, and developers do not have to take part in parsing the protocol.

Cons: packet reception is completely encapsulated inside the framework's built-in objects; the developer is only handed the form fields and file content after the whole request has been received. Upload progress cannot be observed during the transfer, and large files cannot be uploaded (for example, files with more than a few hundred megabytes of binary data).

Goal: in a Java web framework, rely only on the Filter mechanism, not the framework's built-in objects, to parse the MultiPart protocol directly from the byte stream of the browser request, obtaining everything in the user's request, including the file's binary data and the other form fields. The upload file size is then unrestricted, and during the transfer we can get the current progress in real time.

NOTE: in .NET the same can be achieved with the IHttpModule interface, the counterpart of the Java Filter; it is not covered here.

 I previously wrote an HTML5 file upload plugin modeled on uploadify; if you haven't seen it, click here to take a look. It earned a lot of praise, and I have used it in projects myself: user avatar uploads, uploads of all kinds of media files, and various customized business needs could all be met. Quite gratifying.

  But no matter how flexible a plugin is, it can hardly meet every need. Say you want to upload a 2 GB file: at today's network speeds that could easily take half an hour. The problem is that if, at 90%, you accidentally close the browser, or your finger slips and hits F5, it's over, and everything starts from scratch. That user experience is simply too bad, so resumable upload is very necessary. I won't explain what resuming is; after all these years of transferring files with QQ, everyone has seen it.

  The technical points here lie in HTTP. Both traditional form submission and HTML5 FormData submit the file as a single whole; the server receives the complete file and then moves and renames it, so a partially uploaded file cannot be saved as it arrives. And under the HTTP protocol, the browser cannot hold a long-lived connection to the server, nor submit the file as a stream. So the problems to solve come down to the following points:

Split the file and upload one small chunk at a time. After receiving a chunk, the server appends it to the previously received part, finally merging everything into a complete file.

Before uploading each chunk, obtain the size already uploaded, to determine where this chunk should be cut.

After each chunk finishes uploading, update the record of the uploaded size.

Identify the file on both client and server, to make sure the content of file A is never appended to file B.

  With reference to this article by Zhang Xinxu, I applied what I learned to my plugin Huploadify and successfully added the resumable upload feature. My thanks to those who shared the technology and plugins.

 

How it works / technical points

  First of all, to be clear: if we have a 10 MB file and cut it into 1 MB chunks, 10 requests are needed to finish the upload. Under the HTTP protocol there is no other way. Resumable upload breaks down into three steps:

After a file is selected, obtain the size already uploaded, either from the server via a custom function or from local storage.

Cut the file at the uploaded-size offset, submit the chunk to the server, and let the server keep appending the chunk to the file; repeat for each chunk.

When the uploaded size reaches the total file size, the upload ends.

  First, splitting the file. HTML5 added the Blob data type, which provides a method for splitting data: slice(). Like the slice() methods of strings and arrays, it can extract a part of a binary file.
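To make the cutting loop concrete, here is a minimal sketch (my own illustration, not the plugin's internal code; sendChunk is a hypothetical transport callback standing in for the real ajax POST):

```javascript
// A minimal sketch of the cut-and-upload loop. `sendChunk` is a placeholder
// for whatever transport you use (e.g. an ajax POST of a FormData carrying
// the chunk plus the file name and last-modified time).
async function uploadInChunks(file, chunkSize, uploadedSize, sendChunk) {
    let offset = uploadedSize;  // resume from what has already been uploaded
    while (offset < file.size) {
        // Blob.slice() cuts out [offset, offset + chunkSize) of the binary data
        const chunk = file.slice(offset, offset + chunkSize);
        await sendChunk(chunk, offset);  // server appends the chunk to the file
        offset += chunk.size;
    }
    return offset;  // equals file.size when the upload is complete
}
```

Passing a non-zero uploadedSize makes the loop skip everything the server already has, which is exactly the resume behavior.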

  Next, appending the chunks to the file on the server. My backend is written in PHP: it first reads the chunk's binary content with file_get_contents, then appends it to the file with file_put_contents on every request. The exact code is given later; you can also download my packaged files.

  Next we also need to save the uploaded size in real time, so that the next cut starts at the right place. One approach uses HTML5 localStorage: store the uploaded size locally, and read it back before the next upload starts. But this approach is very limited. Leaving aside that users may clear local data with all kinds of cleanup tools, consider this: a user uploads a file on page A to 50%, then tries to upload the same file somewhere on page B; the plugin reads 50% from local storage and starts straight from the 51% mark, which is clearly wrong. The root problem is that we cannot store much identifying information locally: through the File API we can only get the file's original name, which is not enough to match the correct file on the server. So for real projects we have to rely on the server to store the data.
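For completeness, here is a sketch of the localStorage approach with the keying limitation just described baked into the key. This is a demo-only illustration of mine, not plugin code; the store parameter stands in for window.localStorage so the logic can run outside a browser:

```javascript
// Key the stored size by the only identifiers the File API exposes:
// the original name and the last-modified time. Two different files with
// the same name and mtime would still collide -- exactly the limitation
// discussed above.
function uploadKey(file) {
    return 'uploaded:' + file.name + ':' + file.lastModifiedDate.getTime();
}

// `store` is window.localStorage in the browser (or any object with the
// same getItem/setItem interface).
function saveUploadedSizeLocal(store, file, bytes) {
    store.setItem(uploadKey(file), String(bytes));
}

function getUploadedSizeLocal(store, file) {
    return Number(store.getItem(uploadKey(file)) || 0);
}
```

Including the last-modified time in the key at least avoids resuming into a file the user has edited since the previous attempt.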

  How the server stores this data, and how the frontend fetches it, is covered below.

  So much for the technical points; actually there isn't that much rocket science in there. Now let's see how to use the plugin.

 

Using the resume feature

  I won't repeat how to include the plugin; you can refer to its introduction. The key is a few added configuration options. Let's look at them first:

breakPoints: false, // whether to enable resumable upload
fileSplitSize: 1024 * 1024, // chunk size for resumable upload, in bytes; default 1 MB
getUploadedSize: null, // type: function; a custom function to get the uploaded size of a file, used when resume is enabled; receives one parameter, file, the current upload file object, and must return a Number
saveUploadedSize: null, // type: function; a custom function to save the uploaded size, used when resume is enabled; receives two parameters: file, the current upload file object, and value, the uploaded size in bytes
saveInfoLocal: false, // used when resume is enabled: whether to store the uploaded size with localStorage

  These are the plugin's default configuration values. A full five options just to configure resuming, how terrible! Don't worry, hear me out: the five are not all used at once; they are there to cover the complex business scenarios that may arise.

  breakPoints is the switch for resumable upload: set it to true to enable it. It is off by default.

  fileSplitSize is the size of each cut chunk, 1 MB by default; set it according to your actual situation. If the files uploaded in your system are generally large, say above 1 GB, you can configure a larger chunk.

  getUploadedSize is a custom function for getting the uploaded size of a file. Remember the localStorage limitations mentioned above; that's why the plugin hands this function over to you to define. You can fetch the value from the session, a cookie, a file, a database, or anywhere else; you can send an ajax request to whatever address you want, passing whatever parameters you need. Note that the plugin will call the function you define, so it must return a result of type Number.

  saveUploadedSize is the counterpart of getUploadedSize: you define how the uploaded size is saved, and as long as you can later read back the data you stored, it's fine. The precondition, of course, is that you keep the localStorage limitations above in mind and make sure your logic operates on the correct file.

  saveInfoLocal is a switch you need to turn on when you store the data with localStorage. The plugin supports the localStorage approach out of the box; just enable this option. In that case, though, your business logic must be simple enough, for example a mere upload demo, or a system whose only user is yourself, and you must understand how to steer clear of the limitations.

  Once you understand what these five options do, you can implement a resumable upload flexible enough for your needs! My packaged files include a demo using localStorage; sorry that I can't send you my database tables, so local storage has to serve for the demo.
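Putting the options together, a server-backed setup might look like the following. This is only a sketch: the two handler bodies are placeholders to be implemented as described above, and askServerForUploadedSize / tellServerUploadedSize are hypothetical helper names, not part of the plugin:

```javascript
$('#upload').Huploadify({
    breakPoints: true,               // enable resumable upload
    fileSplitSize: 2 * 1024 * 1024,  // 2 MB chunks, for a system with big files
    getUploadedSize: function (file) {
        // ask your backend how many bytes of this file it already has;
        // must return a Number
        return askServerForUploadedSize(file);  // hypothetical helper
    },
    saveUploadedSize: function (file, value) {
        // persist the new uploaded size (in bytes) where you can read it back
        tellServerUploadedSize(file, value);    // hypothetical helper
    }
});
```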

 

Storing the data on the server

  Users may do all sorts of things you'd never expect while uploading. Let me use my imagination and describe some possible behaviors:

Logging in with different accounts on the same machine and uploading the same file

Uploading part of a file, modifying its content, then uploading it again

Uploading a file to 100%, then uploading the same file again

Multiple upload buttons on the same page uploading the same file, or the same file being uploaded on different pages

 

  With just these four cases, isn't the situation complicated enough? Add your system's own business logic on top, and storing the uploaded-file data on the server becomes truly necessary. Both the save and the get functions are left for you to define, which keeps the plugin flexible enough.

  Since server-side technology is involved and can't be demonstrated here, I'll walk through the real usage scenario from my project to show how to define your own functions for reliable, server-backed uploads. My getUploadedSize function is defined as follows:

getUploadedSize: function(file){
    var data = {
        data: {
            fileName: file.name,
            lastModifiedDate: file.lastModifiedDate.getTime()
        }
    };
    var url = 'http://localhost/uploadfile/';
    var uploadedSize = 0;
    $.ajax({
        url: url,
        data: data,
        async: false,
        type: 'POST',
        success: function(returnData){
            returnData = JSON.parse(returnData);
            uploadedSize = returnData.uploadedSize;
        }
    });
    return uploadedSize;
}

 

  I send a request to a backend URL, passing the file name and the file's last modified time as parameters. The backend uses these two parameters to find the file on the server corresponding to the one selected on the frontend, and the file size the server returns is then returned for the plugin to use. Why these two parameters? The frontend cannot know this file's name on the server, so the original file name serves as an auxiliary identifier. To guard against the user modifying the file between two uploads, we also pass the file's last modified time to the server for comparison; if the times don't match, the server returns an uploaded size of 0 and the file is uploaded again from the start.

  Now let's see what the backend has to do. The database needs a table recording the state of each uploaded file, with roughly the following fields:

Field                Description
client_filename      the file's original name on the client
server_filename      the file's name on the server after renaming
last_modified_date   the file's last modified time, as a timestamp
status               the file's status: completed or incomplete
uploaded_size        the size uploaded so far

  Using client_filename and last_modified_date, plus whatever other associated information your system has, the server can locate the file for this upload and return its current size to the client. This is just my own usage, of course; you can design it flexibly around your own needs. The ultimate goal is to find the server-side file that truly corresponds to the one selected on the frontend, and return the uploaded size correctly.
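The lookup just described can be sketched as a pure function over such table rows. This is my illustration using the field names from the table above; a real backend would query the database rather than an in-memory array:

```javascript
// Given the stored upload records and the two parameters sent by the client,
// return the byte offset to resume from. A mismatched last-modified time
// means the client's file changed since the last attempt, so restart at 0.
function findUploadedSize(records, clientFilename, lastModifiedDate) {
    const rec = records.find(r => r.client_filename === clientFilename);
    if (!rec) return 0;                                         // never seen before
    if (rec.last_modified_date !== lastModifiedDate) return 0;  // file changed
    return rec.uploaded_size;                                   // resume from here
}
```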

  One more point to note: in the second step of resuming, while the file chunks are being submitted one after another, the server must also locate the corresponding file accurately, so that A's data is never appended to B. The approach is the same: the two parameters fileName and lastModifyDate are submitted with each chunk (this is built into the plugin, so the server can read them directly), and the server finds the matching file and appends to it.

  One last nagging remark: the backend has to read the file data as binary, and since we submit with FormData, the PHP code needs to be written like this:

file_put_contents('uploads/'.$filename, file_get_contents($_FILES["file"]["tmp_name"]), FILE_APPEND);

  If the explanation above still isn't clear enough, you'll have to explore a bit on your own; after all, since the plugin may be used in complex systems, a lot of the work is left to you. Or leave me a message, and I'll be happy to clear up your doubts.

Other changes in this version

  From 1.0 to 2.0, Huploadify gained a lot of new things, but they are purely additions; the way you use it is unchanged. For example, if you don't want the resumable upload feature above, just set breakPoints to false and the plugin keeps working the old way. Besides resumable upload, the big one, the plugin also made the following changes:

1. Added the onSelect callback, fired after files are selected; usage matches the uploadify website.

2. Deleting a file that is currently uploading now aborts the request.

3. Improved support for the file input's accept attribute: when browsing, only the allowed file formats are shown.

 

  4. Exposed a method-call interface: upload, stop, cancel, disable, ennable. There is a demonstration in the demo. Usage is as follows:

var up = $('#upload').Huploadify({
    auto: false,
    fileTypeExts: '*.jpg;*.png;*.exe;*.mp3;*.mp4;*.zip;*.doc;*.docx;*.ppt;*.pptx;*.xls;*.xlsx;*.pdf',
    multi: true
});

up.upload(1);  // start uploading; takes one parameter, the index of the file to upload; pass * to upload every file in the queue
up.stop();     // pause all files in the queue; takes no parameters; for use with resumable upload enabled
up.cancel(1);  // remove a file from the queue; takes one parameter, the index of the file to remove; pass * to remove every file in the queue
up.disable();  // disable the file-select button; takes no parameters
up.ennable();  // enable the file-select button; takes no parameters

  5. Fixed other known bugs.

Wrapping up

  The demo uses local storage to save the uploaded size; download the archive to see the effect. Upload a fairly large video file, close the browser partway through, then open the browser again and upload the same file: you'll see it continue from where it broke off.

For more detail, see this article I wrote: http://blog.ncmem.com/wordpress/2019/08/09/%e5%a4%a7%e6%96%87%e4%bb%b6%e4%b8%8a%e4%bc%a0%e8%a7%a3%e5%86%b3%e6%96%b9%e6%a1%88/

 


Origin www.cnblogs.com/songsu/p/11887388.html