Using the GPU on a Linux system (TensorFlow)

https://blog.csdn.net/qq_26591517/article/details/82469680

Checking GPU status on the machine

Command: nvidia-smi

Function: display the current status of the machine's GPUs

Command: nvidia-smi -l

Function: refresh the GPU status display at a regular interval

Command: watch -n 3 nvidia-smi

Function: show GPU usage, refreshing at the given interval in seconds (here every 3 seconds)

In the output, the numbers 0, 1, ... on the upper-left side are the GPU indices; you will need these indices when specifying a GPU later.
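If you want to read GPU usage from a script rather than watch the terminal, `nvidia-smi` can emit machine-readable output via its standard `--query-gpu`/`--format` options. A minimal sketch (the parsing helper is a hypothetical name introduced here for illustration):

```python
import subprocess

def gpu_memory_used(smi_output):
    """Parse the output of
    `nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits`
    into a list of per-GPU used-memory values in MiB."""
    return [int(line.strip()) for line in smi_output.splitlines() if line.strip()]

def query_gpu_memory():
    """Run nvidia-smi and return used memory per GPU.
    Requires an NVIDIA driver to be installed."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
        text=True,
    )
    return gpu_memory_used(out)
```

On a two-GPU machine, `query_gpu_memory()` might return something like `[1024, 0]`, one entry per GPU index.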

Specifying the GPU when running a program from the terminal

CUDA_VISIBLE_DEVICES=1 python your_file.py

Running the command this way tells the program to see only GPU 1 before it starts; all other GPUs are invisible to it.

Available forms as follows:

CUDA_VISIBLE_DEVICES=1 Only device 1 will be seen
CUDA_VISIBLE_DEVICES=0,1 Devices 0 and 1 will be visible
CUDA_VISIBLE_DEVICES="0,1" Same as above, quotation marks are optional
CUDA_VISIBLE_DEVICES=0,2,3 Devices 0, 2, 3 will be visible; device 1 is masked

CUDA_VISIBLE_DEVICES="" No GPU will be visible
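The same effect can be achieved from a launcher script by setting the variable in the child process's environment. A minimal sketch (the helper names are introduced here for illustration):

```python
import os
import subprocess
import sys

def visible_devices_env(gpu_ids):
    """Build an environment dict where only the listed GPU indices are visible.
    An empty list yields CUDA_VISIBLE_DEVICES="" (no GPU visible)."""
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = ",".join(str(i) for i in gpu_ids)
    return env

def run_on_gpus(script, gpu_ids):
    """Equivalent of `CUDA_VISIBLE_DEVICES=... python script.py`."""
    return subprocess.run([sys.executable, script], env=visible_devices_env(gpu_ids))
```

For example, `run_on_gpus("your_file.py", [1])` corresponds to `CUDA_VISIBLE_DEVICES=1 python your_file.py`.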

Specifying the GPU in Python code

import os

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"  # number devices the same way nvidia-smi does
os.environ["CUDA_VISIBLE_DEVICES"] = "0"        # expose only GPU 0 to this process

Limiting the amount of GPU memory used

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.9  # use at most 90% of the GPU's memory
session = tf.Session(config=config)

Letting TensorFlow allocate GPU memory on demand

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow the allocation as needed instead of grabbing all memory up front
session = tf.Session(config=config)
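The ConfigProto/Session API above is TensorFlow 1.x. In TensorFlow 2, the equivalent on-demand allocation is configured through tf.config; a sketch of that configuration (it must run before any tensors are placed on the GPU, and does nothing on a machine without one):

```python
import tensorflow as tf

# TensorFlow 2.x equivalent of allow_growth: enable on-demand memory
# allocation for every visible GPU.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```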


Source: www.cnblogs.com/Ann21/p/11087791.html