Table of contents
1. Introduction
2. Install CUDA, cuDNN and PyTorch
3. Verify whether the installation is successful
1. Introduction
When training a deep learning model you can use the CPU, but it is usually slow; using a GPU accelerates training and shortens training time considerably.
Currently, mainstream deep learning frameworks rely on NVIDIA graphics cards via CUDA, which AMD cards do not support. AMD users therefore do not need to worry about installing CUDA and can simply install the CPU version of PyTorch directly.
To use a GPU for accelerated training, three things need to be installed: CUDA, cuDNN, and PyTorch. As everyone knows, PyTorch is an open-source deep learning library; TensorFlow would work here as well, depending on personal preference.
CUDA and cuDNN are easy to confuse at first. Roughly speaking, CUDA is NVIDIA's general-purpose GPU computing platform and toolkit; cuDNN is a library of GPU-optimized primitives for deep neural networks (convolutions, pooling, and so on) built on top of CUDA, which frameworks like PyTorch call into for their computations.
Even with an NVIDIA graphics card, you should first confirm that your GPU supports CUDA. There are two ways to check:
- Open the NVIDIA Control Panel, then Help -> System Information -> Components; under 3D Settings, the NVCUDA64.DLL entry shows the supported CUDA version, as in the figure below:
- The second method is for those who have already installed CUDA and want to check its version. Open the command line (cmd) and run the following command to display the installed CUDA toolkit version:
nvcc -V
Note: there is another command-line way to check the CUDA version: nvidia-smi. The two commands may report different versions, because nvidia-smi shows the highest CUDA version supported by the installed graphics driver, while nvcc -V shows the version of the CUDA Toolkit actually installed. For training DL models on the GPU, the version reported by nvcc -V is the one that matters.
2. Install CUDA, cuDNN and PyTorch
Because both CUDA and cuDNN have to match the PyTorch build, first open the PyTorch installation page: PyTorch
After making the selections shown above, you can see in the Compute Platform row that the PyTorch official site currently only offers builds for CUDA 11.7 and 11.8. My machine currently has CUDA 12.0, so it needs to be downgraded; if you have not installed CUDA yet, install 11.7 or 11.8 directly.
Note: PyTorch is backward compatible with CUDA to a degree: the CUDA version a PyTorch build supports may be higher than you strictly need, but your locally installed CUDA version must not be higher than the one the PyTorch build supports.
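As a preview of the PyTorch step, the install command the official site generates for a CUDA 11.8 build typically looks like the following; copy the exact command from the site itself, since the wheel index URL here is just what the site produced at the time of writing and may change:

```shell
:: Install the CUDA 11.8 build of PyTorch (verify the command on pytorch.org)
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```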
To install CUDA on Windows, you first need to download two packages:
- CUDA Toolkit: CUDA Toolkit 12.1 Update 1 Downloads | NVIDIA Developer
- cuDNN (the library of deep learning primitives)
CUDA installation
Note that the official site defaults to downloading the latest version (12.1); to get an older one, select "Archive of Previous CUDA Releases":
I choose to install version 11.8.0 here:
Choose your system version and installation method. The "local" installer is about 3.0 GB and the "network" installer about 29 MB: the former is a complete offline package, while the latter is a small stub that downloads components as needed. I chose the local installer here. After the download completes, you can click Next all the way through the installer. When installation finishes, check your environment variables; if the following variables appear, the installation succeeded (the install path shown is one I customized):
If you did not choose a custom path during installation and used the express install, the default path is:
C:\Program Files\NVIDIA GPU Computing Toolkit
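For reference, with the default path a standard CUDA 11.8 install typically sets variables like the following (check your own under System Properties -> Environment Variables; names below assume the default location):

```
CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
CUDA_PATH_V11_8=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
(the installer also adds ...\CUDA\v11.8\bin and ...\CUDA\v11.8\libnvvp to Path)
```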
If the environment variables above appear, CUDA is installed successfully. Now open the command line (cmd) and run nvcc -V; you should see the CUDA version:
cuDNN installation
With the first step done, the next is installing cuDNN: https://developer.nvidia.com/rdp/cudnn-download . Downloading cuDNN requires registering an NVIDIA account (and you may need a VPN or proxy to reach the site). Then choose the cuDNN build corresponding to the CUDA version you just installed.
Note: the cuDNN version must match your CUDA version; there is no cross-version compatibility, so make sure they correspond!
Since the CUDA version installed above is 11.8, choose the cuDNN build "for CUDA 11.x". cuDNN ships as a compressed archive that you simply extract; after extraction you get the following contents:
Think of CUDA as the tool and cuDNN as its deep learning add-on: copy the three folders bin, include and lib from the extracted cuDNN folder into the CUDA installation directory from the previous step. The default install directory is C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8; mine is in a CUDA folder on the F drive, so use whatever location you installed to.
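The manual copy step can also be scripted. Here is a minimal Python sketch (the function name merge_cudnn_into_cuda is my own, and both paths in the example are placeholders to adjust for your machine):

```python
import shutil
from pathlib import Path

def merge_cudnn_into_cuda(cudnn_dir, cuda_dir):
    """Copy cuDNN's bin/include/lib folders into the CUDA install directory."""
    for sub in ("bin", "include", "lib"):
        src = Path(cudnn_dir) / sub
        if src.is_dir():
            # dirs_exist_ok merges into the existing CUDA folders instead of failing
            shutil.copytree(src, Path(cuda_dir) / sub, dirs_exist_ok=True)

# Example with the default install path (adjust both paths to your machine):
# merge_cudnn_into_cuda(r"C:\Downloads\cudnn-extracted",
#                       r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8")
```

Note that dirs_exist_ok requires Python 3.8+; it lets the cuDNN files land alongside the files already in CUDA's own bin, include and lib folders.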
Once the copy is complete, the installation of CUDA and cuDNN is finished; next, test whether it succeeded.
3. Verify whether the installation is successful
Open the command line, cd into the CUDA installation directory and then into its \extras\demo_suite subdirectory, and run the two test programs there (bandwidthTest.exe and deviceQuery.exe) to check whether the installation succeeded:
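Assuming the default install path (swap in your own directory if you customized it), the steps look like this in cmd:

```shell
cd /d "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\extras\demo_suite"
bandwidthTest.exe
deviceQuery.exe
```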
When both executables finish as shown above with Result = PASS, CUDA and cuDNN are installed successfully. The next article will cover installing PyTorch and solving common problems.
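Once PyTorch itself is installed (covered in the next article), you can also run a quick sanity check from Python. This sketch uses the standard torch API and degrades gracefully if torch or a GPU is missing; the helper name check_cuda is my own:

```python
def check_cuda():
    """Return (cuda_available, device_name); (False, None) if torch or a GPU is absent."""
    try:
        import torch  # CUDA support requires the GPU (e.g. cu118) build of PyTorch
    except ImportError:
        return False, None
    if not torch.cuda.is_available():
        return False, None
    return True, torch.cuda.get_device_name(0)

available, name = check_cuda()
print("CUDA available:", available, "| device:", name)
```

If this prints False on a machine with an NVIDIA card, the usual culprit is a CPU-only PyTorch build or a CUDA/driver mismatch.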
Hope this helps!