This tutorial assumes you have already installed and configured Python 3.
1. Go to the official website to download a matching CUDA version
CUDA download address
The current highest CUDA version is 12.1, which is the version I installed.
Tip: with a custom installation you can choose to install only the CUDA Runtime; there is no need to install the entire Nvidia suite, which takes up about 6 GB on the system drive.
2. Install pytorch
I previously ran pip install torch (version 2.0.0), but the torch installed that way runs only on the CPU. To use the GPU version, you need a build that matches your CUDA version.
Although the PyTorch website currently lists support only for CUDA 11.8, the community has confirmed that it is compatible with newer CUDA versions.
The picture above lists my local torch-related libraries.
As you can see, it is a mess:
torch is the CPU build,
torchaudio is the GPU build,
torchvision is the CPU build.
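You can also tell CPU from GPU builds without importing anything, by looking at the version strings that pip list reports: GPU wheels carry a local version label such as 2.0.0+cu118, CPU wheels 2.0.0+cpu. A minimal sketch (the function name is my own):

```python
def build_type(version: str) -> str:
    # PyTorch wheels encode the compute platform in the local version label,
    # e.g. "2.0.0+cu118" (built for CUDA 11.8) or "2.0.0+cpu";
    # a bare version number does not say which build it is.
    if "+cu" in version:
        return "gpu"
    if "+cpu" in version:
        return "cpu"
    return "unknown"

print(build_type("2.0.0+cu118"))  # → gpu
print(build_type("2.0.0+cpu"))    # → cpu
```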
You can call torch.cuda.is_available() to check whether your torch is the GPU build.
So I decided to uninstall all of them and reinstall the GPU builds of the whole torch family:
pip uninstall torch torchvision torchaudio
Then perform the installation. This is the install command for the latest version at the time of writing:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
3. Verify that the installation is successful
Use pip list to check that the installed versions are correct.
Then execute torch.cuda.is_available(); if it returns True, the GPU build is in use.
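Putting the check together, here is a small sketch that reports whether the installed torch can see the GPU (the function name and messages are my own; it also degrades gracefully if torch is missing):

```python
def gpu_status() -> str:
    # Report whether the installed torch build can use the GPU.
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if torch.cuda.is_available():
        # torch.version.cuda is the CUDA version the wheel was built against
        return f"GPU build, CUDA {torch.version.cuda}, device: {torch.cuda.get_device_name(0)}"
    return "CPU-only build (or no visible CUDA device)"

print(gpu_status())
```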
4. (Extended) Install transformers
My previous version was 4.27.2, but when I ran Hugging Face's GLM project, I hit a dynamic module loading error.
Someone has already filed an issue with the maintainers; version 4.26.1 is recommended as relatively stable.
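If you want to catch the incompatible version before running the project, a small version check helps (the pin value 4.26.1 comes from the issue above; the helper name is mine):

```python
def needs_downgrade(installed: str, pinned: str = "4.26.1") -> bool:
    # True if the installed transformers version is newer than the pinned one;
    # compares dotted version strings numerically, e.g. "4.27.2" > "4.26.1".
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) > as_tuple(pinned)

print(needs_downgrade("4.27.2"))  # → True (my old version was newer than the pin)
print(needs_downgrade("4.26.1"))  # → False
```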