Only the backend code is shown here. The basic idea is that the frontend splits the file into chunks, and each request to the upload endpoint passes parameters to the backend: the index of the current chunk and the total number of chunks.
The code is pasted directly below; the trickier parts are commented:
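As a rough illustration of the chunking arithmetic the frontend relies on (the actual splitting happens inside the upload control; the class, block size, and method names below are assumptions for illustration only):

```java
// Sketch: compute the offset and size of each block for a file of a given
// length. The 2 MB block size and all names here are illustrative
// assumptions, not part of the control's actual API.
public class BlockMath {
    static final long BLOCK_SIZE = 2L * 1024 * 1024; // assume 2 MB blocks

    /** Total number of blocks needed for a file of lenLoc bytes. */
    public static long blockCount(long lenLoc) {
        return (lenLoc + BLOCK_SIZE - 1) / BLOCK_SIZE;
    }

    /** Byte offset of the 1-based block blockIndex. */
    public static long blockOffset(long blockIndex) {
        return (blockIndex - 1) * BLOCK_SIZE;
    }

    /** Size of the 1-based block blockIndex (the last block may be shorter). */
    public static long blockSize(long lenLoc, long blockIndex) {
        return Math.min(BLOCK_SIZE, lenLoc - blockOffset(blockIndex));
    }
}
```

With this scheme a 5 MB file yields three blocks: two full 2 MB blocks and a final 1 MB block.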
Upload file entity class:
As you can see, the entity class already contains many of the fields we need, along with practical attributes, such as the information used for MD5-based instant upload (skipping the transfer when an identical file already exists on the server).
public class FileInf {
    public FileInf() {}
    public String id = "";
    public String pid = "";
    public String pidRoot = "";
    /** Whether the current item is a folder task. */
    public boolean fdTask = false;
    /** Whether this is a child file inside a folder. */
    public boolean fdChild = false;
    /** User ID, for integration with third-party systems. */
    public int uid = 0;
    /** The file name on the local machine. */
    public String nameLoc = "";
    /** The file name on the server. */
    public String nameSvr = "";
    /** The full path of the file on the local machine. Example: D:\Soft\QQ2012.exe */
    public String pathLoc = "";
    /** The full path of the file on the server. Example: F:\ftp\uer\md5.exe */
    public String pathSvr = "";
    /** The relative path of the file on the server. Example: /www/web/upload/md5.exe */
    public String pathRel = "";
    /** File MD5. */
    public String md5 = "";
    /** Numeric file length in bytes. Example: 120125 */
    public long lenLoc = 0;
    /** Formatted file size. Example: 10.03MB */
    public String sizeLoc = "";
    /** Resume position for interrupted uploads. */
    public long offset = 0;
    /** Uploaded size in bytes. */
    public long lenSvr = 0;
    /** Uploaded percentage. Example: 10% */
    public String perSvr = "0%";
    public boolean complete = false;
    public Date PostedTime = new Date();
    public boolean deleted = false;
    /** Whether the item has been scanned. Used for large folders: scanning starts after the folder upload completes. */
    public boolean scaned = false;
}
First comes the file-data receiving logic, which receives the chunk data uploaded by the control and writes it into the server-side file. The control provides the chunk's index, size, MD5, and length, so we can handle the data however we need, for example saving chunks to a distributed storage system.
<%
out.clear();
String uid = request.getHeader("uid");
String id = request.getHeader("id");
String lenSvr = request.getHeader("lenSvr");
String lenLoc = request.getHeader("lenLoc");
String blockOffset = request.getHeader("blockOffset");
String blockSize = request.getHeader("blockSize");
String blockIndex = request.getHeader("blockIndex");
String blockMd5 = request.getHeader("blockMd5");
String complete = request.getHeader("complete");
String pathSvr = "";
// The parameter is empty
if( StringUtils.isBlank( uid )
|| StringUtils.isBlank( id )
|| StringUtils.isBlank( blockOffset ))
{
XDebug.Output("param is null");
return;
}
// Check that we have a file upload request
boolean isMultipart = ServletFileUpload.isMultipartContent(request);
FileItemFactory factory = new DiskFileItemFactory();
ServletFileUpload upload = new ServletFileUpload(factory);
List files = null;
try
{
files = upload.parseRequest(request);
}
catch (FileUploadException e)
{// Parsing file data error
out.println("read file data error:" + e.toString());
return;
}
FileItem rangeFile = null;
// Get all uploaded files
Iterator fileItr = files.iterator();
// Loop through all files
while (fileItr.hasNext())
{
// get the current file
rangeFile = (FileItem) fileItr.next();
if(StringUtils.equals( rangeFile.getFieldName(),"pathSvr"))
{
pathSvr = rangeFile.getString();
pathSvr = PathTool.url_decode(pathSvr);
}
}
boolean verify = false;
String msg = "";
String md5Svr = "";
// Guard against a request that contained no file block data
if (rangeFile == null)
{
    out.write("no file block data received");
    return;
}
long blockSizeSvr = rangeFile.getSize();
if(!StringUtils.isBlank(blockMd5))
{
md5Svr = Md5Tool.fileToMD5(rangeFile.getInputStream());
}
verify = Integer.parseInt(blockSize) == blockSizeSvr;
if(!verify)
{
msg = "block size error sizeSvr:" + blockSizeSvr + " sizeLoc:" + blockSize;
}
if(verify && !StringUtils.isBlank(blockMd5))
{
verify = md5Svr.equals(blockMd5);
if(!verify) msg = "block md5 error";
}
if(verify)
{
// Save file block data
FileBlockWriter res = new FileBlockWriter();
// Only the first block is created
if( Integer.parseInt(blockIndex)==1) res.CreateFile(pathSvr,Long.parseLong(lenLoc));
res.write( Long.parseLong(blockOffset),pathSvr,rangeFile);
up6_biz_event.file_post_block(id,Integer.parseInt(blockIndex));
JSONObject o = new JSONObject();
o.put("msg", "ok");
o.put("md5", md5Svr);
o.put("offset", blockOffset); // byte offset of this block within the file
msg = o.toString();
}
rangeFile.delete();
out.write(msg);
%>
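The block verification above calls a helper, Md5Tool.fileToMD5, which ships with the control. A minimal equivalent built on java.security.MessageDigest (the class name below is an assumption, not the control's actual code) could look like:

```java
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of an MD5 helper similar in spirit to Md5Tool.fileToMD5.
public class Md5Sketch {
    public static String fileToMD5(InputStream in) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            md.update(buf, 0, n); // hash the stream incrementally, no full buffering
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b)); // lowercase hex digest
        }
        return hex.toString();
    }
}
```

Hashing the stream incrementally keeps memory usage flat even for large blocks.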
File initialization section
<%
out.clear();
WebBase web = new WebBase(pageContext);
String id = web.queryString("id");
String md5 = web.queryString("md5");
String uid = web.queryString("uid");
String lenLoc = web.queryString("lenLoc"); // numeric file size in bytes, e.g. 12021
String sizeLoc = web.queryString("sizeLoc"); // formatted file size, e.g. 10MB
String callback = web.queryString("callback");
String pathLoc = web.queryString("pathLoc");
pathLoc = PathTool.url_decode(pathLoc);
// Reject the request if any required parameter is missing
if ( StringUtils.isBlank(md5)
|| StringUtils.isBlank(uid)
|| StringUtils.isBlank(sizeLoc))
{
out.write(callback + "({\"value\":null})");
return;
}
FileInf fileSvr= new FileInf();
fileSvr.id = id;
fileSvr.fdChild = false;
fileSvr.uid = Integer.parseInt(uid);
fileSvr.nameLoc = PathTool.getName(pathLoc);
fileSvr.pathLoc = pathLoc;
fileSvr.lenLoc = Long.parseLong(lenLoc);
fileSvr.sizeLoc = sizeLoc;
fileSvr.deleted = false;
fileSvr.md5 = md5;
fileSvr.nameSvr = fileSvr.nameLoc;
// Every single file is stored under its own uuid/ directory
PathBuilderUuid pb = new PathBuilderUuid();
fileSvr.pathSvr = pb.genFile(fileSvr.uid,fileSvr);
fileSvr.pathSvr = fileSvr.pathSvr.replace("\\","/");
DBConfig cfg = new DBConfig();
DBFile db = cfg.db();
FileInf fileExist = new FileInf();
boolean exist = db.exist_file(md5,fileExist);
// The same file already exists in the database with upload progress, so reuse that information directly (instant upload / resume)
if(exist && fileExist.lenSvr > 1)
{
fileSvr.nameSvr = fileExist.nameSvr;
fileSvr.pathSvr = fileExist.pathSvr;
fileSvr.perSvr = fileExist.perSvr;
fileSvr.lenSvr = fileExist.lenSvr;
fileSvr.complete = fileExist.complete;
db.Add(fileSvr);
//trigger event
up6_biz_event.file_create_same(fileSvr);
} // This file does not exist
else
{
db.Add(fileSvr);
//trigger event
up6_biz_event.file_create(fileSvr);
FileBlockWriter fr = new FileBlockWriter();
fr.CreateFile(fileSvr.pathSvr,fileSvr.lenLoc);
}
Gson gson = new Gson();
String json = gson.toJson(fileSvr);
json = URLEncoder.encode(json, "UTF-8"); // URL-encode to avoid garbled Chinese characters
json = json.replace("+", "%20"); // URLEncoder encodes spaces as '+', but the client expects %20
json = callback + "({\"value\":\"" + json + "\"})"; // return JSONP-formatted data
out.write(json);%>
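The encode-then-replace step above matters because URLEncoder follows HTML form-encoding rules, where a space becomes '+', while decodeURIComponent on the client side expects %20. A quick standalone check of that behavior (the class name is hypothetical):

```java
import java.net.URLEncoder;

public class JsonpEncode {
    // Encode a JSON payload for embedding in a JSONP response,
    // mirroring the encode + replace step in the page above.
    public static String encode(String json) throws Exception {
        String s = URLEncoder.encode(json, "UTF-8");
        return s.replace("+", "%20"); // form encoding uses '+', URIs use %20
    }
}
```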
Step 1: obtain a RandomAccessFile, the random-access file class.
Step 2: call RandomAccessFile's getChannel() method to open a FileChannel. This logic can be optimized later: if distributed storage becomes a requirement, the write can be redirected to distributed storage here to reduce the load on a single server.
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.RandomAccessFile;
import org.apache.commons.fileupload.FileItem;

public class FileBlockWriter {
    public FileBlockWriter() {}

    /** Pre-create the target file at its final size so blocks can be written at any offset. */
    public void CreateFile(String pathSvr, long lenLoc)
    {
        try
        {
            File ps = new File(pathSvr);
            PathTool.createDirectory(ps.getParent());
            RandomAccessFile raf = new RandomAccessFile(pathSvr, "rw");
            raf.setLength(lenLoc); // fix: create the file at its original size
            raf.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /** Write one block of data at the given offset. */
    public void write(long offset, String pathSvr, FileItem block)
    {
        try
        {
            InputStream stream = block.getInputStream();
            byte[] data = new byte[(int) block.getSize()];
            // read() may return fewer bytes than requested, so loop until the block is fully read
            int read = 0;
            while (read < data.length) {
                int n = stream.read(data, read, data.length - read);
                if (n < 0) break;
                read += n;
            }
            stream.close();
            RandomAccessFile raf = new RandomAccessFile(pathSvr, "rw");
            raf.seek(offset);
            raf.write(data, 0, read);
            raf.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Step 3: get the current block index and compute the block's offset within the file.
Step 4: get the byte array of the current block, which gives the block's byte length.
Step 5: use FileChannel's map() method to create a direct byte buffer, MappedByteBuffer.
Step 6: put the block's byte array into the buffer at the current position: mappedByteBuffer.put(byte[] b).
Step 7: release the buffer.
Step 8: check whether the file has been completely uploaded.
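Steps 3 through 8 can be sketched as follows. This is a minimal, self-contained illustration of writing one block through a MappedByteBuffer, not the control's actual code; the class and method names are assumptions:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedBlockWriter {
    /** Write one block at the given offset using a memory-mapped region. */
    public static void writeBlock(String pathSvr, long offset, byte[] block) throws Exception {
        RandomAccessFile raf = new RandomAccessFile(pathSvr, "rw");
        FileChannel channel = raf.getChannel(); // step 2: open the file channel
        // step 5: map only the region this block occupies
        MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, offset, block.length);
        buf.put(block); // step 6: copy the block into the mapped region
        buf.force();    // flush the mapped bytes to disk
        channel.close();
        raf.close();
    }
}
```

Note that before Java 14 there is no portable public API to explicitly unmap the buffer (step 7); the mapping is released when the buffer is garbage-collected.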
Folder scanning
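The original folder-scanning code is not shown here. As a hedged sketch of the general idea, walking a local folder and collecting the relative paths that would become fdChild items of a folder task might look like this (all names are illustrative assumptions, not the control's API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

// Sketch only: the upload control ships its own folder scanner.
public class FolderScanSketch {
    /** Collect the relative paths of all regular files under root. */
    public static List<String> scan(String root) throws IOException {
        Path base = Paths.get(root);
        List<String> files = new ArrayList<>();
        try (Stream<Path> walk = Files.walk(base)) {
            walk.filter(Files::isRegularFile)
                .forEach(p -> files.add(base.relativize(p).toString().replace("\\", "/")));
        }
        return files;
    }
}
```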
Storage path generation class
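The initialization page above calls PathBuilderUuid.genFile to place each single file in its own uuid directory. The control's actual implementation is not shown; a sketch in that spirit, with an assumed root directory and layout, might be:

```java
import java.util.UUID;

// Sketch of a uuid-based storage path builder in the spirit of
// PathBuilderUuid.genFile; the root directory and {root}/{uid}/{uuid}/{name}
// layout here are assumptions, not the control's actual scheme.
public class PathBuilderSketch {
    private final String root;

    public PathBuilderSketch(String root) { this.root = root; }

    /** Each file gets its own uuid directory: {root}/{uid}/{uuid}/{fileName} */
    public String genFile(int uid, String fileName) {
        String uuid = UUID.randomUUID().toString().replace("-", "");
        return root + "/" + uid + "/" + uuid + "/" + fileName;
    }
}
```

Giving every file its own uuid directory avoids name collisions between different users uploading files with the same name.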
Well, that's all. If you have any questions or criticisms, comments and private messages are welcome; we can learn and grow together.
Finally, here is a screenshot of the result.
The back-end logic is largely the same across databases; MySQL, Oracle, and SQL Server are currently supported. Before use you need to configure the database; you can refer to this article I wrote: http://blog.ncmem.com/wordpress/2019/08/07/java huge file upload and download /
You are welcome to join the discussion group: 374992201