How to use the GPU to run TensorFlow 2.12 on Windows WSL

Background

1. Install WSL on windows

2. Install miniconda on WSL.

3. Create a conda environment

4. Set up the GPU

5. Install tensorflow 2.12

6. Run your GPU Tensorflow 2.12 code in Pycharm


Background

TensorFlow 2.10 was the last release whose GPU build ran on native Windows; newer versions have no tensorflow-gpu package for Windows. To use the GPU with TensorFlow 2.11 or later, you must install WSL2 on Windows and run TensorFlow inside it. You can of course fall back to an older tensorflow-gpu release, but if you want a current TensorFlow, WSL2 is the only route. This article shows how to use the GPU from Windows to run TensorFlow 2.12.

If your tensorflow version is earlier than 2.10, you can refer to the earlier blog post.

1. Install WSL on windows

Developers can access the power of both Windows and Linux on a single Windows computer. Windows Subsystem for Linux (WSL) lets developers install a Linux distribution (such as Ubuntu, OpenSUSE, Kali, Debian, or Arch Linux) and use Linux applications, utilities, and Bash command-line tools directly on Windows, unmodified, without the overhead of a traditional virtual machine or dual-boot setup.

Prerequisites

You must be running Windows 10 version 2004 or later (Build 19041 or later) or Windows 11 to use these commands.

Install WSL command


You can now install everything you need to run WSL with a single command. Open PowerShell or Windows Command Prompt in administrator mode (right-click and select Run as Administrator), enter the wsl --install command, then restart the computer.

wsl --install

If the command is not recognized, you can use wsl.exe --install instead.

OK, let's continue after restarting.

Tip: The command above installs Ubuntu by default. If you want a different distribution, list the available ones with wsl --list --online and install one with wsl --install -d <DistroName>.

If you run into errors about virtualization being disabled, enter the BIOS and enable it (VT-x/AMD-V).

Then run the installation command again.

Once you're done you'll find Ubuntu in the Start menu. Click it to open.

Create a username and password when prompted.

 References:  https://learn.microsoft.com/en-us/windows/wsl/install

2. Install miniconda on WSL.

curl https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -o Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh

3. Create a conda environment

Create a virtual environment; mine is named tf2.12:

conda create --name tf2.12 python=3.9

 

Execute conda activate tf2.12 to enter the virtual environment.
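TensorFlow 2.12 ships wheels only for Python 3.8 through 3.11, which is why the environment above pins Python 3.9. As a quick sanity check inside the activated environment, you can run a sketch like the following (the helper name is mine, not part of any library):

```python
import sys

def supports_tf212(major, minor):
    """Return True if this Python version has TensorFlow 2.12 wheels."""
    # TensorFlow 2.12 supports Python 3.8 through 3.11.
    return major == 3 and 8 <= minor <= 11

if __name__ == "__main__":
    ok = supports_tf212(sys.version_info.major, sys.version_info.minor)
    print(f"Python {sys.version_info.major}.{sys.version_info.minor} "
          f"{'is' if ok else 'is NOT'} supported by TensorFlow 2.12")
```

If this prints "is NOT", recreate the environment with a supported Python version before continuing.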

4. Set up the GPU

Install nvidia gpu driver

Run the following command to verify that the NVIDIA GPU driver is installed; if it is not, download and install it from https://www.nvidia.com/download/index.aspx?lang=en-us

nvidia-smi

Install cuda toolkit 

conda install -c conda-forge cudatoolkit=11.8.0

Install cuDNN

pip install nvidia-cudnn-cu11==8.6.0.163

If the download is slow, you can use a mirror:

pip install nvidia-cudnn-cu11==8.6.0.163 -i https://pypi.tuna.tsinghua.edu.cn/simple
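The cudatoolkit 11.8 and cuDNN 8.6 versions used above are not arbitrary; they come from TensorFlow's tested build configurations. A small lookup sketch (the dict and helper are mine; the version pairs are taken from the TensorFlow install documentation):

```python
# CUDA/cuDNN pairs from TensorFlow's tested GPU build configurations;
# only the rows relevant to this article are included.
TESTED_GPU_BUILDS = {
    "2.12": {"cuda": "11.8", "cudnn": "8.6"},
    "2.11": {"cuda": "11.2", "cudnn": "8.1"},
    "2.10": {"cuda": "11.2", "cudnn": "8.1"},
}

def required_versions(tf_version):
    """Look up the CUDA/cuDNN pair tested against a TensorFlow release."""
    return TESTED_GPU_BUILDS[tf_version]

print(required_versions("2.12"))  # prints {'cuda': '11.8', 'cudnn': '8.6'}
```

Installing a mismatched pair (for example cuDNN 8.1 with TensorFlow 2.12) is a common reason the GPU is not detected.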

Configure the library path. Every time you start a new terminal, activate the conda environment and then run the following commands:

CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/:$CUDNN_PATH/lib:$LD_LIBRARY_PATH
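The two shell lines above locate the directory where pip placed the cuDNN package and prepend it, together with the conda environment's lib directory, to LD_LIBRARY_PATH. The lookup itself is just "directory containing a module's __file__"; here is the same logic in plain Python (package_dir is my helper name, and the stdlib json module stands in for nvidia.cudnn so the sketch runs anywhere):

```python
import importlib
import os

def package_dir(module_name):
    """Directory containing a package, mirroring dirname $(python -c ...)."""
    mod = importlib.import_module(module_name)
    return os.path.dirname(mod.__file__)

# Inside the tf2.12 environment you would call package_dir("nvidia.cudnn")
# and append "<result>/lib" to LD_LIBRARY_PATH; here we demo with json.
print(package_dir("json"))
```

If LD_LIBRARY_PATH is missing these entries, TensorFlow imports fine but fails to find libcudnn at runtime.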

5. Install tensorflow 2.12

Install tensorflow 2.12 version

pip install tensorflow==2.12

Execute the following command to verify that TensorFlow can detect the GPU:

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

If a list of GPU devices is returned, you have successfully installed TensorFlow.

If you want to run a Python file that lives on a Windows drive, prefix the Windows path with /mnt/<drive letter>.

For example, to reach C:\apps\PycharmProjects\, run the following command:

cd /mnt/c/apps/PycharmProjects/
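The /mnt/<drive> convention can be captured in a small helper (a hypothetical function of mine; inside WSL the built-in wslpath utility does this conversion for you):

```python
import re

def windows_to_wsl(path):
    """Convert a Windows path like C:\\apps\\x to its WSL mount /mnt/c/apps/x."""
    m = re.match(r"^([A-Za-z]):[\\/](.*)$", path)
    if not m:
        raise ValueError(f"not an absolute Windows path: {path!r}")
    drive, rest = m.group(1).lower(), m.group(2).replace("\\", "/")
    return f"/mnt/{drive}/{rest}"

print(windows_to_wsl(r"C:\apps\PycharmProjects"))  # → /mnt/c/apps/PycharmProjects
```

Note that the drive letter is lowercased under /mnt, regardless of how Windows displays it.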

6. Run your GPU Tensorflow 2.12 code in Pycharm

Add a new interpreter in PyCharm and select On WSL.

Click Next.

Choose the virtual environment you created before (mine is tf2.12) and click Create.

Then run the test code.

import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))

Sample output:

2023-07-18 21:57:44.195176: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:2d:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-07-18 21:57:44.195233: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:2d:00.0/numa_node
Your kernel may have been built without NUMA support.
Num GPUs Available:  1

7. Problems encountered

1. Can’t find libdevice directory ${CUDA_DIR}/nvvm/libdevice

Solution:

Copy the nvvm directory from your conda environment (under $CONDA_PREFIX) into the directory you run your script from.
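An alternative to copying files is to tell XLA where the CUDA files live before TensorFlow is imported, via the documented --xla_gpu_cuda_data_dir flag in the XLA_FLAGS environment variable. A minimal sketch, assuming the CUDA files sit under your conda environment (adjust the path to your setup):

```python
import os

# Point XLA at the conda environment's CUDA files *before* importing
# tensorflow; set it too late and XLA falls back to the default search.
cuda_dir = os.environ.get("CONDA_PREFIX", "/opt/conda/envs/tf2.12")
os.environ["XLA_FLAGS"] = f"--xla_gpu_cuda_data_dir={cuda_dir}"

print(os.environ["XLA_FLAGS"])
```

Place these lines at the very top of your script, above the tensorflow import.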

2. Couldn't get ptxas/nvlink version string: INTERNAL: Couldn't invoke ptxas --version

Solution:

conda install -c nvidia cuda-nvcc

References: https://www.tensorflow.org/install/pip


Origin blog.csdn.net/keeppractice/article/details/131776217