Using GPUs with TensorFlow

By default, TensorFlow occupies all the memory of every visible GPU; even if a specific GPU is selected, it still grabs all of that GPU's memory at once. This can be addressed in the following ways:

1. Specify the GPU by setting an environment variable in Python code

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"  # use only the third GPU
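Note that the variable must be set before TensorFlow initializes CUDA, or it has no effect. As a minimal, TensorFlow-free sketch of the logical-to-physical renumbering that results (the mapping dictionary here is only illustrative, not a TensorFlow API):

```python
import os

# Must be set before TensorFlow first touches CUDA; afterwards it is ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

# The visible devices are renumbered from 0, so "/gpu:0" now means physical GPU 2.
visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
mapping = {"/gpu:%d" % i: "physical GPU %s" % p for i, p in enumerate(visible)}
print(mapping)  # {'/gpu:0': 'physical GPU 2'}
```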

2. Specify the GPU via a system environment variable

# Use only the second GPU to run demo_code.py. On this machine the second GPU
# becomes "/gpu:0", so all operations placed on /gpu:0 run on the second physical GPU
CUDA_VISIBLE_DEVICES=1 python demo_code.py

# Use only the first and second GPUs
CUDA_VISIBLE_DEVICES=0,1 python demo_code.py
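The same effect can be reproduced from Python by passing the variable into a child process's environment. A small sketch, where the inline `-c` script merely stands in for demo_code.py:

```python
import os
import subprocess
import sys

# Launch a child interpreter that only sees the second physical GPU.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")
child = "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"
out = subprocess.run([sys.executable, "-c", child],
                     env=env, capture_output=True, text=True)
print(out.stdout.strip())  # 1
```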

3. Allocate GPU memory dynamically

# allow_soft_placement=True: if an operation cannot run on the GPU, fall back to the CPU
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)

config.gpu_options.allow_growth = True  # allocate GPU memory on demand

with tf.Session(config=config) as sess:
    sess.run(...)

4. Allocate a fixed fraction of GPU memory

# Allocate a fixed fraction of memory
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
# The following line lets the process use at most 40% of each visible GPU's memory
config.gpu_options.per_process_gpu_memory_fraction = 0.4

with tf.Session(config=config) as sess:
    sess.run(...)

After this setting, GPU usage on my machine looks like this:

gz_6237_gpu             Sat Feb 15 23:01:56 2020  418.87.00
[0] GeForce RTX 2080 Ti | 43'C,   0 % |  4691 / 10989 MB | dc:python/1641(4681M)
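As a quick sanity check on these numbers (taken from the gpustat line above): 40% of the card's 10989 MB is about 4396 MB, and the reported 4691 MB is slightly higher, presumably because the CUDA context itself is not counted against per_process_gpu_memory_fraction.

```python
# Sanity check using the figures from the gpustat output above
total_mb = 10989   # total memory of the RTX 2080 Ti
fraction = 0.4     # per_process_gpu_memory_fraction
cap_mb = total_mb * fraction
print(round(cap_mb))  # 4396
```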

5. Assign operations to a specific device with tf.device

with tf.device("/gpu:0"):
    b = tf.Variable(tf.zeros([1]))
    W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
    y = tf.matmul(W, x_data) + b

This approach is not recommended. TensorFlow's kernel definitions determine which operations can run on the GPU and which cannot, so forcing operations onto the GPU reduces the program's portability.

The recommended approach is to set allow_soft_placement=True when creating the session; if an operation cannot be executed on the GPU, TensorFlow will automatically place it on the CPU instead.

config = tf.ConfigProto(allow_soft_placement=True)

with tf.Session(config=config) as sess:
    sess.run(...)



Origin www.cnblogs.com/zingp/p/12315366.html