[Learn AI with Me · Deep Learning Environment Setup] Installing CUDA, Python, PyTorch, TensorFlow & MXNet Step by Step on Ubuntu 18.04

This article provides a minimal, step-by-step example of installing CUDA, Python, and PyTorch. The example is based on Ubuntu 18.04; Ubuntu 16.04 and Windows 10 users can follow it and make the corresponding adjustments.

Preparation

Example environment:

Ubuntu 18.04

Tools to install:

  1. NVIDIA driver (connects the GPU to the host)
  2. CUDA Toolkit (dependency for GPU acceleration)
  3. Miniconda (installs Python and manages Python environments)
  4. GPU builds of PyTorch, TensorFlow, and MXNet

Steps

Install the NVIDIA Driver

Ubuntu 18.04: run in Bash

sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub

sudo apt-get update

wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb

sudo apt install ./nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb

sudo apt-get update

sudo apt-get install --no-install-recommends nvidia-driver-450
# Reboot. Check that GPUs are visible using the command: nvidia-smi

Ubuntu 16.04: run in Bash

sudo apt-get install gnupg-curl

sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub

sudo apt-get update

wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb
sudo apt install ./nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb
sudo apt-get update


sudo apt-get install --no-install-recommends nvidia-418
# Reboot. Check that GPUs are visible using the command: nvidia-smi

Then reboot the system and run nvidia-smi to confirm the driver is working. The version shown after NVIDIA-SMI in the output should match the driver installed above (nvidia-driver-450, or nvidia-418 on Ubuntu 16.04).

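For a quick sanity check from the command line, the following can be used as well (a minimal sketch; these are standard nvidia-smi and kernel-module queries, nothing specific to this guide):

# Show the GPU model and driver version in a compact form
nvidia-smi --query-gpu=name,driver_version --format=csv

# The loaded kernel module also reports the driver version
cat /proc/driver/nvidia/version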

Install the CUDA Toolkit

Using CUDA 11.0 on Ubuntu 18.04 as the example.

1. Search (e.g. on Baidu) for: cuda 11.0

Click the first result (use a proxy if the page will not open): CUDA Toolkit 11.0 Download | NVIDIA Developer

Ubuntu 16.04: select, in order, Linux -> x86_64 -> Ubuntu -> 16.04 -> deb (local)

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-ubuntu1604.pin
sudo mv cuda-ubuntu1604.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget http://developer.download.nvidia.com/compute/cuda/11.0.2/local_installers/cuda-repo-ubuntu1604-11-0-local_11.0.2-450.51.05-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604-11-0-local_11.0.2-450.51.05-1_amd64.deb
sudo apt-key add /var/cuda-repo-ubuntu1604-11-0-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda

Ubuntu 18.04: select, in order, Linux -> x86_64 -> Ubuntu -> 18.04 -> deb (local), then copy the Installation Instructions shown on the page and run them in Bash:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget http://developer.download.nvidia.com/compute/cuda/11.0.2/local_installers/cuda-repo-ubuntu1804-11-0-local_11.0.2-450.51.05-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1804-11-0-local_11.0.2-450.51.05-1_amd64.deb
sudo apt-key add /var/cuda-repo-ubuntu1804-11-0-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda

Xiaosong's note: the step most likely to cause trouble is the third one, which downloads the full installer (around 2 GB). Readers familiar with Bash will recognize that it simply downloads the file at http://developer.download.nvidia.com/compute/cuda/11.0.2/local_installers/cuda-repo-ubuntu1804-11-0-local_11.0.2-450.51.05-1_amd64.deb. You can copy that link into a download tool such as Xunlei to speed things up, place the downloaded file in the directory where you are running Bash (so the third command can be skipped), and then continue with the fourth step to install it.
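Once the install finishes, the toolkit lands under /usr/local/cuda-11.0 by default. A minimal sketch for putting it on your shell environment and verifying the compiler (this assumes the default install path used by the deb packages; adjust it if you installed elsewhere):

# Add the CUDA 11.0 binaries and libraries to the current shell (append these lines to ~/.bashrc to make them permanent)
export PATH=/usr/local/cuda-11.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.0/lib64:$LD_LIBRARY_PATH

# Verify that the CUDA compiler is visible and reports release 11.0
nvcc --version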


Install Miniconda

The Tsinghua mirror is recommended: mirrors.tuna.tsinghua.edu.cn/anaconda/mi…

Scroll to the bottom of the page and download the latest release for your system (make sure to pick the x86_64 build), e.g. Miniconda3-py38_4.9.2-Linux-x86_64.sh

After downloading, run the installer in Bash (mind the path to the file):

bash Miniconda3-py38_4.9.2-Linux-x86_64.sh

When prompted, answer yes; accept the defaults for everything else. After installation completes, open a new terminal so that conda takes effect.
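In the new terminal, a couple of quick checks confirm that Miniconda is on your PATH (these are standard conda commands, shown only as a sanity check):

# Print the conda version; getting a version number means the install and shell setup worked
conda --version

# List existing environments; right after installation only "base" should appear
conda env list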

Install the GPU Versions of PyTorch, TensorFlow, and MXNet

Reference: 『技术随手学』pip conda 替换清华源 Windows与Ubuntu通用

1. First, switch conda and pip to the Tsinghua mirrors to speed up downloads. Run in Bash:

#pip
pip install pip -U
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

#conda
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --append channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/fastai/
conda config --append channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/
conda config --append channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/bioconda/
 

conda config --set show_channel_urls yes
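To confirm the mirrors were picked up, you can print the resulting configuration (standard pip and conda commands; a quick check rather than a required step):

# Show the pip configuration, including the Tsinghua index-url set above
pip config list

# Show the conda channel list in priority order
conda config --show channels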

If you hit a CondaHTTPError: HTTP 000 CONNECTION FAILED error in the following steps, see this post for a fix:

『技术随手学』解决windows与ubuntu平台 CondaHTTPError: HTTP 000 CONNECTION FAILED 问题

2. Create a Python environment for deep learning with conda. Run in Bash:

conda create -n dl_py37 python=3.7

Then activate the environment. Note that after running the command the prompt should change from (base) root@b9fc5be9c7f1:~# to (dl_py37) root@b9fc5be9c7f1:~#:

conda activate dl_py37
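Besides the prompt change, you can verify that the environment's own interpreter is the one in use (a quick check, not a required step; the path shown in the comment assumes a default Miniconda location):

# Should point at the python inside the dl_py37 environment, e.g. ~/miniconda3/envs/dl_py37/bin/python
which python

# Should report Python 3.7.x
python --version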

3. Install PyTorch 1.7. Run in Bash (cudatoolkit 10.1 is recommended so that TensorFlow 2.3 is also supported):

conda install pytorch torchvision torchaudio cudatoolkit=10.1

4. Install TensorFlow. Run in Bash (note that it is tensorflow==2.3 with a double equals sign; TensorFlow 2.3 supports GPU by default, so no separate GPU package needs to be specified):

pip install tensorflow==2.3

5. Install MXNet. Run in Bash (note that cu101 corresponds to cudatoolkit=10.1):

pip install mxnet-cu101==1.7
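Before running the tests, you can list what actually got installed into the environment (standard package-listing commands):

# PyTorch and cudatoolkit come from conda
conda list | grep -E "pytorch|cudatoolkit"

# TensorFlow and MXNet come from pip
pip list | grep -iE "tensorflow|mxnet"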

6. Test PyTorch, TensorFlow, and MXNet

Reference: 『AI实践学』测试深度学习框架GPU版本是否正确安装方法:TensorFlow,PyTorch,MXNet,PaddlePaddle

1) TensorFlow

The test is the same for TensorFlow 1.x and TensorFlow 2.x; the code is:

import tensorflow as tf

print(tf.test.is_gpu_available())

Save the code above as a .py file and run it in the environment you want to test. The output looks like the following (a stream of log lines; the key part is the final True, which means the test passed):

2020-09-28 15:43:03.197710: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-09-28 15:43:03.204525: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-09-28 15:43:03.232432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: GeForce RTX 2070 with Max-Q Design major: 7 minor: 5 memoryClockRate(GHz): 1.125
pciBusID: 0000:01:00.0
2020-09-28 15:43:03.235352: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2020-09-28 15:43:03.242823: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2020-09-28 15:43:03.261932: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_100.dll
2020-09-28 15:43:03.268757: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_100.dll
2020-09-28 15:43:03.297478: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_100.dll
2020-09-28 15:43:03.315410: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_100.dll
2020-09-28 15:43:03.330562: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-09-28 15:43:03.332846: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-09-28 15:43:05.198465: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-28 15:43:05.200423: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0
2020-09-28 15:43:05.201540: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N
2020-09-28 15:43:05.203863: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/device:GPU:0 with 6306 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070 with Max-Q Design, pci bus id: 0000:01:00.0, compute capability: 7.5)
True

The lines above are log output; the key part is the final True, which means the test passed. The log also reveals quite a bit of information about the GPU:

GPU model: name: GeForce RTX 2070 with Max-Q Design

CUDA version: Successfully opened dynamic library cudart64_100.dll (10.0)

cuDNN version: Successfully opened dynamic library cudnn64_7.dll (7.x)

Number of GPUs: Adding visible gpu devices: 0 (device index 0, i.e. one GPU)

GPU memory: /device:GPU:0 with 6306 MB memory (on an 8 GB card)

2) PyTorch

PyTorch is tested much like TensorFlow; both provide an API for checking GPU availability. The PyTorch test code is:

import torch

print(torch.cuda.is_available())

Save the code above as a .py file and run it in the environment you want to test. An output of True means the test passed:

True
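If you want the same kind of device detail that TensorFlow prints in its log, PyTorch exposes it through a few standard query functions (a minimal sketch):

import torch

if torch.cuda.is_available():
    print(torch.version.cuda)                       # CUDA version PyTorch was built against, e.g. 10.1
    print(torch.cuda.device_count())                # number of visible GPUs
    print(torch.cuda.get_device_name(0))            # GPU model name
    props = torch.cuda.get_device_properties(0)
    print(props.total_memory // (1024 ** 2), "MB")  # total GPU memory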

As you can see, PyTorch's default test output is much more concise than TensorFlow's. TensorFlow's log verbosity can in fact also be controlled.
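For example, setting the TF_CPP_MIN_LOG_LEVEL environment variable before importing TensorFlow hides the INFO and WARNING lines (a minimal sketch):

import os

# 0 = all logs, 1 = hide INFO, 2 = hide INFO and WARNING, 3 = hide INFO, WARNING, and ERROR
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

import tensorflow as tf

print(tf.test.is_gpu_available())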

3) MXNet

Testing MXNet is different from PyTorch and TensorFlow, because MXNet has no dedicated GPU-check API (or at least I have not found one). The test therefore creates an array on the GPU inside a try/except block and treats an exception as failure. The code is:

import mxnet as mx

mxgpu_ok = False

try:
    # Try to create a small NDArray directly on GPU 0; this fails if no usable GPU is present
    _ = mx.nd.array([1], ctx=mx.gpu(0))
    mxgpu_ok = True
except Exception:
    mxgpu_ok = False

print(mxgpu_ok)

Save the code above as a .py file and run it in the environment you want to test. An output of True means the test passed.
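Recent MXNet 1.x releases also expose a helper that reports the number of visible GPUs, which can serve as an alternative check (treat this as an assumption about your MXNet version and fall back to the try/except above if the function is missing):

import mxnet as mx

# Returns the number of GPUs MXNet can see; 0 means CPU-only or a broken CUDA setup
print(mx.context.num_gpus())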

Appendix: How to Uninstall the NVIDIA Driver and CUDA Toolkit

To remove NVIDIA Drivers:

sudo apt-get --purge remove "*nvidia*"
sudo apt autoremove

To remove CUDA Toolkit:

sudo apt-get --purge remove "*cublas*" "cuda*"
sudo apt autoremove

Feel free to follow my (Xiaosong's) WeChat official account 《极简AI》 to learn deep learning with me:

I share deep learning theory and application development techniques there and regularly post practical deep learning content. If you run into problems while learning or applying deep learning, you are also welcome to discuss them with me there; I will answer whatever I can.

By CSDN blog expert and Zhihu deep learning columnist @小宋是呢


Reposted from juejin.im/post/7000407074358165534