9. PyCharm+Docker: Create the most comfortable alchemy furnace for deep learning
Install docker:
How to Install Docker and Docker Compose in Ubuntu 22.04 LTS
https://zhuanlan.zhihu.com/p/547169542
Modify the Linux hard disk volume label:
ntfs file system: https://blog.csdn.net/nyist_yangguang/article/details/109958484
ext2/3/4 file systems: https://cn.linux-console.net/?p=1185#gsc.tab=0
Import the old version of the Docker folder:
https://zhuanlan.zhihu.com/p/95533274
Start the demo container (docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi)
Error encountered: "Unknown runtime specified nvidia"
This happens because nvidia-docker is not installed. Refer to this blog to install nvidia-docker:
https://blog.csdn.net/weixin_44633882/article/details/115362059
After installing, import the old version of the Docker folder as described above, and then try:
sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
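For Ubuntu, the install in that blog reduces to roughly the following (a sketch of the older nvidia-docker2 flow; the repository URLs and package name are assumptions from that era and may have since moved to nvidia-container-toolkit):

```shell
# Add NVIDIA's apt repository and GPG key (URLs assumed from the nvidia-docker2 era)
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list \
    | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Install the package and restart the daemon so it registers the nvidia runtime
sudo apt-get update && sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
```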
For CentOS systems, refer to these two articles:
Install docker
https://www.runoob.com/docker/centos-docker-install.html
Use the official installation script for an automatic install; the command is as follows:
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
You can also use the China-hosted DaoCloud one-click installation command:
curl -sSL https://get.daocloud.io/docker | sh
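After either script finishes, a quick sanity check can confirm the daemon works (standard Docker commands; hello-world is Docker's official test image):

```shell
# Confirm the client is installed and the daemon is running
docker --version
sudo systemctl enable --now docker

# Pull and run Docker's official smoke-test container
sudo docker run --rm hello-world
```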
Install nvidia-docker2
https://zhuanlan.zhihu.com/p/540669989
The specific steps are:
Set up the repository and the GPG key:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
Install nvidia-docker2 (if the network connection is poor, this step will be a little slow):
sudo yum install -y nvidia-docker2
Restart Docker:
sudo systemctl restart docker
Changing the mounted folder path of a Docker container that has already been created:
https://blog.csdn.net/bf96163/article/details/108405502
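The linked post edits the container's config files under /var/lib/docker by hand. A sketch of a safer alternative is to snapshot the container as an image and re-create it with the new mount (the container and path names below are illustrative, borrowed from the run command later in this section; /new_disk/docker-v is an assumed new host path):

```shell
# Snapshot the existing container, remove it, and re-run with the new host folder
sudo docker stop pytorch_1.8
sudo docker commit pytorch_1.8 pytorch_1.8:snapshot
sudo docker rm pytorch_1.8
sudo docker run -it --name pytorch_1.8 \
    -v /new_disk/docker-v:/remote_workspace \
    pytorch_1.8:snapshot /bin/bash
```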
PyCharm+Docker
https://zhuanlan.zhihu.com/p/52827335
sudo docker run --runtime=nvidia --shm-size="8g" -p 4321:22 -p 4322:6006 -p 4323:80 --name="pytorch_1.8" -v /mount_disk/docker-v:/remote_workspace -it pytorch/pytorch:1.8... /bin/bash
(on newer Docker versions, use --gpus all in place of --runtime=nvidia)
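Since the run command maps container port 22 to host port 4321, PyCharm can connect to the container as a remote SSH interpreter once an SSH server runs inside it. A rough sketch, assuming a Debian-based image such as pytorch/pytorch (the root password is a placeholder you should change):

```shell
# Inside the container: install and start an SSH server
apt-get update && apt-get install -y openssh-server
mkdir -p /var/run/sshd
echo 'root:your_password_here' | chpasswd                     # placeholder password
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
service ssh start

# From the host (or in PyCharm's SSH interpreter settings):
#   host = <server IP>, port = 4321, user = root
ssh -p 4321 root@<server-ip>
```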
10. Docker image export and import
docker image export
First, check for the existing container to be packaged into a tar:
docker ps -a
Next, save the container as an image with the commit command: -a gives the author, -m "commit message". The format is: docker commit -a <author> -m <message> <existing container ID> <saved image name>:<tag>
docker commit -a "tmf" -m "tmf-web" 7740db56288a tmf-web:v20191123
Next, check that the new image appears:
docker images
Then package the image with the save command. The format is: docker save -o <output tar file> <image name>:<tag>
docker save -o tmf-web20191123.tar tmf-web:v20191123   (note that this saves an image, not a container)
If the import fails, you can try saving and importing the container directly instead:
docker export -o D:\containers\dockerdemocontainer.tar <container name>
docker import dockerdemocontainer.tar imagename:version
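Note that the two command pairs are not interchangeable: docker save/docker load work on images and preserve layers, history, and metadata such as the CMD, while docker export/docker import work on a container's filesystem and flatten it, discarding that metadata. An image created with docker import therefore usually needs an explicit command when started:

```shell
# Imported images have no CMD/ENTRYPOINT, so name the command explicitly
docker run -it imagename:version /bin/bash
```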
tar file compression
https://segmentfault.com/a/1190000024498487
https://blog.csdn.net/capecape/article/details/78548723
# compress
[root@localhost tmp]# gzip buodo
[root@localhost tmp]# ls
buodo.gz
# decompress
[root@localhost tmp]# gunzip buodo.gz
[root@localhost tmp]# ls
buodo
scp file transfer:
$ scp -P <port> <path of file to transfer> <user>@<host>:<destination path>
To copy an entire directory with its contents, add -r:
$ scp -P <port> -r <directory path> <user>@<host>:<destination path>
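For example, sending an exported image tar to a server whose SSH daemon listens on port 4321 might look like this (the host address and paths are placeholders):

```shell
# Copy a single file to the remote host
scp -P 4321 tmf-web20191123.tar user@192.168.1.100:/home/user/images/

# Copy an entire directory recursively
scp -P 4321 -r ./docker-v user@192.168.1.100:/mount_disk/
```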
tar file splitting and merging
1. Pack and compress a directory
tar -czf file.tar.gz filedir
2. Extract
tar -zxf file.tar.gz
3. Split a large file into pieces of at most 100 MB
3.1)
split -b 100m file.tar.gz file.tar.gz.
3.2) With the suffix set to two digits
split -a 2 -d -b 100m file.tar.gz file.tar.gz.
(the trailing dot in the output prefix keeps the piece names matching the file.tar.gz.* glob used in step 4)
4. Merge the pieces
cat file.tar.gz.* > file.tar.gz
5. Pack, compress, and split a large directory in one pipeline
tar -czf - filedir | split -a 2 -d -b 100m - file.tar.gz.
6. Merge and extract in one pipeline
cat file.tar.gz.* | tar -zxf -
Original source: https://blog.csdn.net/pan0755/article/details/51865877
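The split-and-merge steps above can be verified end to end on dummy data; a minimal self-contained sketch (works in a throwaway temp directory and uses 1 MB pieces to keep it quick):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Create ~3 MB of incompressible dummy data and pack it while splitting into 1 MB pieces
mkdir filedir
head -c 3000000 /dev/urandom > filedir/data.bin
tar -czf - filedir | split -a 2 -d -b 1M - file.tar.gz.

# Merge the pieces back and extract into a separate directory
cat file.tar.gz.* > file.tar.gz
mkdir restore
tar -zxf file.tar.gz -C restore

# Verify the round trip preserved the data byte for byte
cmp filedir/data.bin restore/filedir/data.bin && echo "round trip OK"
```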
docker image import
Then transfer the packaged tar file to another server for deployment or upgrade.
First run load to import the image: load imports an image that was exported with the docker save command.
docker load -i tmf-web20191123.tar
Then check that the image is present:
docker images
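Finally, you can start a container from the loaded image to confirm it works (using the example image name from above):

```shell
# List the loaded image, then start a throwaway container from it
docker images | grep tmf-web
docker run -it --rm tmf-web:v20191123 /bin/bash
```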