Python deep learning: specifying a particular GPU with CUDA_VISIBLE_DEVICES

If the GPU is specified in many places in the code, changing it later becomes troublesome. Instead, we can select a particular GPU through CUDA_VISIBLE_DEVICES: from the program's perspective, only the specified GPU is visible, and the other GPUs, being invisible, will never be used.

When running a Python script, you can specify the GPU as follows:

# run_python.sh
gpu=2
CUDA_VISIBLE_DEVICES=$gpu python gpuProgram.py 

Another way is to specify it inside the Python file itself:

import os 
os.environ['CUDA_VISIBLE_DEVICES'] = '2'
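Note that this assignment must happen before the deep learning framework (e.g. torch or tensorflow) initializes CUDA, which in practice usually means before the framework is imported. A minimal sketch of this second method:

```python
import os

# Restrict the program to physical GPU 2. Set this BEFORE importing
# the deep learning framework, otherwise the setting may be ignored.
os.environ['CUDA_VISIBLE_DEVICES'] = '2'

# From this point on, the framework sees exactly one GPU, indexed 0.
visible = os.environ['CUDA_VISIBLE_DEVICES'].split(',')
print(visible)       # ['2']
print(len(visible))  # 1 -- the framework's visible device count
```

Inside the program, that single visible GPU is then addressed as device 0 (e.g. `cuda:0` in PyTorch), regardless of its physical number.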
Command and effect:

CUDA_VISIBLE_DEVICES=1 — only the GPU numbered 1 is visible to the program; in code, gpu[0] refers to that GPU.
CUDA_VISIBLE_DEVICES=0,2,3 — only GPUs 0, 2, and 3 are visible; in code, gpu[0] refers to GPU 0, gpu[1] to GPU 2, and gpu[2] to GPU 3.
CUDA_VISIBLE_DEVICES=2,0,3 — the same GPUs 0, 2, and 3 are visible, but gpu[0] refers to GPU 2, gpu[1] to GPU 0, and gpu[2] to GPU 3; the order of the list determines the mapping.
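The mapping above can be illustrated without a GPU: the value of CUDA_VISIBLE_DEVICES is an ordered list, and the program's logical device index i is simply the i-th entry. The helper below (logical_to_physical is a hypothetical name, not a real CUDA API) just parses the variable to show this:

```python
import os

def logical_to_physical(logical_index: int) -> int:
    """Map the program's gpu[i] index to the physical GPU number,
    following the order given in CUDA_VISIBLE_DEVICES."""
    ids = os.environ['CUDA_VISIBLE_DEVICES'].split(',')
    return int(ids[logical_index])

os.environ['CUDA_VISIBLE_DEVICES'] = '2,0,3'
print(logical_to_physical(0))  # 2 -- gpu[0] is physical GPU 2
print(logical_to_physical(1))  # 0 -- gpu[1] is physical GPU 0
print(logical_to_physical(2))  # 3 -- gpu[2] is physical GPU 3
```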

For more details, see "setting CUDA_VISIBLE_DEVICES" on the Tencent Cloud Developer Community (tencent.com).

Origin blog.csdn.net/Zilong0128/article/details/128385109