Concurrent upload of large files (front end)

This is a note on a simple implementation of large-file upload. More detailed write-ups exist elsewhere; this just records the key points.

Its main limitation: there is no resumable-upload (breakpoint resume) support.
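The "concurrent upload" part boils down to two small pieces: slicing the file into fixed-size chunks, and uploading the chunks with a bounded number of parallel requests. Below is a minimal sketch of both, assuming a chunk size chosen by the caller; in the browser each range would map to `file.slice(start, end)`.

```typescript
// One chunk of the file, described by its index and byte range [start, end).
interface ChunkRange { index: number; start: number; end: number }

// Split a file of `fileSize` bytes into ranges of at most `chunkSize` bytes.
function createChunks(fileSize: number, chunkSize: number): ChunkRange[] {
  const chunks: ChunkRange[] = [];
  for (let start = 0, index = 0; start < fileSize; start += chunkSize, index++) {
    chunks.push({ index, start, end: Math.min(start + chunkSize, fileSize) });
  }
  return chunks;
}

// Run async tasks with a fixed concurrency limit, so a file with hundreds of
// chunks does not fire hundreds of simultaneous requests.
async function runWithLimit<T>(tasks: (() => Promise<T>)[], limit: number): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++; // claim the next task index (JS is single-threaded, so this is safe)
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}
```

In practice each task would be a `fetch` that POSTs one `file.slice(start, end)` as `FormData`; the request shape depends entirely on your backend, so it is left out here.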

How to implement it: compute a hash of the entire file, and send that file hash along with every chunk upload.

On each later upload, the client first sends the file hash to the backend to check whether this file was (partially) uploaded before. If so, the backend returns the indices of the chunks it already received, so the client knows where the previous upload left off and only uploads the missing chunks.
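The resume step above reduces to a small set difference: given the backend's list of already-received chunk indices, keep only the chunks still missing. A sketch, where the shape of the backend response (a plain array of indices) is an assumption:

```typescript
// Given the total number of chunks and the indices the backend reports as
// already received (matched by the file hash), return the indices that
// still need to be uploaded.
function pendingChunks(totalChunks: number, uploadedIndices: number[]): number[] {
  const uploaded = new Set(uploadedIndices);
  const pending: number[] = [];
  for (let i = 0; i < totalChunks; i++) {
    if (!uploaded.has(i)) pending.push(i);
  }
  return pending;
}
```

A hypothetical check endpoint such as `GET /upload/check?hash=...` would supply `uploadedIndices`; the client then uploads only `pendingChunks(total, uploadedIndices)`.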

The optimization below is needed only for the resumable-upload feature: computing the file hash is expensive when the file is large, so it should be moved off the main thread.

Further optimization (for very large files, on the order of 100 GB): spawn multiple Web Worker threads and hash the slices in parallel.
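To fan hash work out to several workers, one common approach is to partition the chunk indices across N workers, let each worker hash its slices, and then derive the file hash from the combined partial results. Only the partitioning step is shown here; the worker wiring and the hash library (e.g. SparkMD5 or hash-wasm) are assumptions and left out.

```typescript
// Distribute chunk indices round-robin across `workerCount` Web Workers.
// Each inner array is the list of chunk indices one worker will hash.
function partitionChunks(chunkCount: number, workerCount: number): number[][] {
  const partitions: number[][] = Array.from({ length: workerCount }, () => []);
  for (let i = 0; i < chunkCount; i++) {
    partitions[i % workerCount].push(i);
  }
  return partitions;
}
```

Note that an incremental hash over the whole file must consume slices in order, so with parallel workers you typically hash each slice independently and then hash the concatenation of the per-slice hashes to get a stable file fingerprint.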

Reference:

https://juejin.cn/post/7177045936298786872?searchId=20230906224201914FD2C71500E3C3F871#heading-6


Origin: blog.csdn.net/weixin_43416349/article/details/132725405