1. Prepare the dataset and network code
1. Dataset
① Put the dataset in a folder named dataset. ② Compress the folder into an archive with the .zip suffix. ③ Click OK to finish.
2. Code
① Put the code in a folder named coad. ② Compress the folder into an archive with the .zip suffix. ③ Click OK to finish.
This gives us two .zip archives: dataset.zip and coad.zip.
2. Use the AutoDL server
1. AutoDL address
https://www.autodl.com/register?code=e0ab7117-bd25-4480-8184-5953048a2502
2. Open AutoDL (registration is not covered here)
① Open the website.
② Newly registered users receive a 10-yuan voucher, which is enough to experiment with for a while.
③ Choose the GPU you want to use. For this walkthrough, pick the most affordable GPU, a TITAN Xp, and click "1 card" to rent it.
④ The following interface appears; click Base Image.
⑤ Choose the deep learning framework you need. Here we select PyTorch → 1.10.0 → Python 3.8 (Ubuntu 20.04) → CUDA 11.3.
⑥ Click Create Now.
⑦ Click JupyterLab.
⑧ Wait a moment, then click Terminal.
3. Using AutoDL
① Drag the dataset and code archives into AutoDL's file panel.
② Click Upload. A large dataset may take a while to transfer, so please be patient.
③ When the progress bar is full, the upload has succeeded.
④ Download the decompression tool arc.
Enter the command: curl -L -o /usr/bin/arc http://autodl-public.ks3-cn-beijing.ksyun.com/tool/arc && chmod +x /usr/bin/arc
⑤ Unzip.
Enter the command to decompress the code: arc decompress coad.zip
Enter the command to decompress the dataset: arc decompress dataset.zip
3. Training model
1. Switch to the coad folder
Command: cd coad
This switches the working directory to the coad folder.
2. Training model
Command: python model.py
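Training can take a long time, so it is worth saving the console output to a file as well. A minimal sketch of the pattern (model_demo.py below is a stand-in for the real coad/model.py, so the snippet runs anywhere):

```shell
# Stand-in for the real training script coad/model.py
printf "print('epoch 1 done')\n" > model_demo.py

# tee shows the output live and also saves it to train.log for later review
python3 model_demo.py 2>&1 | tee train.log
```

On the server you would run the real script the same way: python model.py 2>&1 | tee train.log.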
3. Path
Pay close attention to the path of the dataset.
Edit the dataset path in train.py so that it points to where you placed the dataset.
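As a hypothetical illustration, suppose train.py stores the dataset location in a variable like the one below (your real train.py may name it differently). You can edit it by hand in JupyterLab, or rewrite it in place with sed:

```shell
# train_demo.py is a stand-in for the real train.py
printf "data_path = 'dataset'\n" > train_demo.py

# Rewrite the path in place, the same way you would edit the real file
sed -i "s|data_path = 'dataset'|data_path = '/root/dataset'|" train_demo.py

cat train_demo.py   # prints: data_path = '/root/dataset'
```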
4. Shutdown
When training is finished, remember to shut down the instance, so you are not billed for idle time.
5. Compression
If you want to compress all the files in the current directory, use the wildcard *, that is:
zip -r archive-name *
You can also use a pattern such as *.txt to compress only the files with the .txt suffix:
zip -r archive-name *.txt