TensorFlow GPU settings

When you run a program with the GPU version of TensorFlow, unless the code says otherwise the program by default occupies every GPU on the host, even though the computation will only use one of them. In other words, all the GPUs look busy and you might assume the program is computing on them in parallel, when in fact only one is doing any work; the rest sit idle, but their memory is claimed, so nobody else can use them. This can be fixed by adding three lines at the beginning of the program:

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # number GPUs by PCI bus order, matching nvidia-smi
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"       # expose only GPU 0 and GPU 1 to this program

Placed at the beginning of the program, these lines make TensorFlow see only gpu0 and gpu1 and mask every other GPU device on the system (the numbers should of course be chosen according to your actual GPUs).

Note that the second line, os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID", is also very important: it guarantees that the GPU numbering used by the program matches the hardware (PCI bus) order shown by nvidia-smi. Without it, CUDA numbers devices fastest-first, and the mismatch can cause unnecessary trouble.
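
If you want to confirm which devices the program actually sees after setting these variables, TensorFlow 1.x can list them via device_lib. A short sketch (device_lib lives under an internal tensorflow.python path, but it is a commonly used way to enumerate devices):

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

from tensorflow.python.client import device_lib

# prints the CPU plus only the GPUs left visible above
print([d.name for d in device_lib.list_local_devices()])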

If you do not want to use the GPU at all, change the third line to os.environ['CUDA_VISIBLE_DEVICES'] = "". Every GPU device is then hidden from the program, and only the CPU can be used.
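
A minimal sketch of the CPU-only variant; note that these variables must be set before TensorFlow initializes CUDA, so it is safest to set them before importing tensorflow:

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide every GPU; TensorFlow falls back to the CPU

import tensorflow as tf  # imported after the variables are set

with tf.Session() as sess:
    # this op now runs on /cpu:0
    print(sess.run(tf.constant("running on CPU")))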

In addition, a TensorFlow program by default occupies all the memory on the visible cards. If you want the program to use only as much memory as it actually needs, add a setting when creating the session:

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # start small and grow GPU memory allocation as needed
session = tf.Session(config=config)
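
If, instead of growing on demand, you want to cap the program at a fixed share of GPU memory, tf.ConfigProto also exposes per_process_gpu_memory_fraction. A short sketch (the value 0.4 here is an arbitrary example):

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4  # claim at most ~40% of each visible GPU's memory
session = tf.Session(config=config)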


While the program is running, you can use the nvidia-smi command to check GPU memory usage.
Original link: https://blog.csdn.net/byron123456sfsfsfa/article/details/79811286
