Deep learning and computer vision in practice (0) - basic tools, NVIDIA driver and CUDA installation

First, install the basic dependency packages that most development tools require, such as git, ATLAS for matrix computation, and Graphviz for graph visualization; also install pip, NumPy, and OpenCV.

Taking Ubuntu 18.04 as an example:

 

$ sudo apt install python-pip

$ sudo pip install numpy

$ sudo apt update

$ sudo apt install build-essential git libatlas-base-dev

$ sudo pip install graphviz
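
The commands above do not actually install OpenCV. One common way to get the Python bindings (an assumption on my part; this uses the prebuilt opencv-python wheel from PyPI rather than a build from source) is:

$ sudo pip install opencv-python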

 

For all current deep learning frameworks, training a neural network realistically requires an NVIDIA GPU together with its companion GPU programming toolkit, CUDA.

Before installing the NVIDIA driver, you need to remove the open-source nouveau driver that ships with the system:

$ sudo apt --purge remove xserver-xorg-video-nouveau

Then add the PPA that provides the NVIDIA drivers:

$ sudo add-apt-repository ppa:graphics-drivers/ppa

Then install the driver and CUDA toolkit:

$ sudo apt install nvidia-361 nvidia-settings nvidia-prime

$ sudo apt install nvidia-cuda-toolkit

After the installation is complete, enter:

$ nvidia-smi

If the installation is successful, the graphics card information will be displayed, as follows:

$ nvidia-smi
Mon Apr 12 11:35:06 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.141                Driver Version: 390.141                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 705      Off  | 00000000:01:00.0 N/A |                  N/A |
| 26%   42C    P8    N/A /  N/A |    729MiB /   959MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+
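
The CUDA toolkit itself can be checked in a similar way; assuming the package put nvcc on the PATH, the compiler will report its version:

$ nvcc --version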
 

 

For other Linux distributions, the steps are very similar: first uninstall the built-in graphics driver, then either use the distribution's own packages or download the driver and CUDA from the NVIDIA website and install them according to the instructions. The download addresses are http://www.nvidia.com/Download/index.aspx and https://developer.nvidia.com/cuda-downloads.

cuDNN is an NVIDIA library built on top of CUDA specifically for accelerating deep neural networks, and it is an optional install. The download address is https://developer.nvidia.com/cudnn; find the matching version and fill in the required information to download it. The download is a compressed package. Taking cuDNN 5.1 as an example, execute the following commands to unpack the cuDNN library and copy it into the corresponding CUDA folders.

$ tar -xvzf cudnn-8.0-linux-x64-v5.1-ga.tgz

$ sudo cp -P cuda/include/cudnn.h /usr/local/cuda/include

$ sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64
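
To confirm which cuDNN version ended up under /usr/local/cuda, one option (for cuDNN 5.x the version macros live directly in cudnn.h) is to grep the header:

$ grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn.h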

For any deep learning framework, a CPU-side matrix computation package is also one of the basic libraries. Besides the ATLAS installed at the beginning of this section, Intel's MKL (Math Kernel Library) is often a better option because of its excellent performance. The download address of MKL is https://software.intel.com/en-us/intel-mkl/.

MKL is free for individual use but requires a registration step to obtain a license. Its installation is not complicated: download the installation package, unpack it, execute install.sh or install_GUI.sh, and follow the prompts step by step.
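
A rough sketch of that flow looks like the following (the archive name is hypothetical and depends on the MKL version you downloaded; the /opt/intel prefix and the mklvars.sh location assume the installer's defaults):

$ tar -xvzf l_mkl_xxxx.x.xxx.tgz        # hypothetical archive name
$ cd l_mkl_xxxx.x.xxx
$ sudo ./install.sh
$ source /opt/intel/mkl/bin/mklvars.sh intel64   # set up MKL environment variables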

 

 

Next, install MXNet.

Clone the source from MXNet's GitHub page, https://github.com/dmlc/mxnet:

$ git clone --recursive https://github.com/dmlc/mxnet

After that, an mxnet folder appears under the directory where the command was executed. The first step is to configure the basic build options: open the config.mk file in the mxnet/make folder. The following three options are the main ones to configure.

USE_CUDA=0

USE_CUDNN=0

USE_BLAS=atlas

The values listed above are the defaults. For GPU training you need to change USE_CUDA to at least 1. If you want cuDNN and MKL, also change USE_CUDNN to 1 and USE_BLAS to mkl, as in the sketch below.
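
For example, a GPU build with cuDNN and MKL could use settings along these lines (a sketch of the relevant lines in config.mk; the USE_CUDA_PATH value assumes CUDA is installed under /usr/local/cuda):

USE_CUDA=1
USE_CUDA_PATH=/usr/local/cuda
USE_CUDNN=1
USE_BLAS=mkl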

After configuration, start the build. On Ubuntu there is a very convenient way: go to the mxnet/setup-utils folder and execute:

$ cd mxnet/setup-utils

$ sh install-mxnet-ubuntu-python.sh

Wait for the execution to end and you're done.

The general method is to go back to the mxnet directory and execute:

$ cd mxnet

$ make -j

With no number, -j lets make run as many parallel compile jobs as it can; adding a number directly after -j (for example, make -j4) limits the build to that many cores.

Then install the Python interface:

$ cd python

$ sudo python setup.py install
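
As a quick sanity check (a minimal sketch; the GPU line assumes the build used USE_CUDA=1 and that a CUDA device is available), you can run a small NDArray computation from the Python shell:

$ python
>>> import mxnet as mx
>>> a = mx.nd.ones((2, 3))              # a 2x3 NDArray of ones on the CPU
>>> print((a * 2).asnumpy())            # should print a 2x3 array of 2.0
>>> b = mx.nd.ones((2, 3), mx.gpu(0))   # only works in a GPU-enabled build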

 
