Using GPU acceleration in TensorFlow

While testing Faster R-CNN, computation on the CPU was too slow, so the code was adjusted to run the heavy operations on the GPU.

  • Replace  with tf.Session() as sess:  with:

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.9)
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=True), graph=detection_graph) as sess:
    with tf.device("/gpu:0"):

After this change, GPU memory filled up while GPU utilization stayed at zero. The official documentation explains why: "on the GPU, tf.Variable supports only real-number types (float16, float32, double); integer parameters are not supported."

Meanwhile the CPU was nearly maxed out, and TensorFlow was visibly running in main memory, so the computation was in fact still being executed on the CPU.

The following code lists the devices TensorFlow can detect:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

A TensorFlow program can specify the device each operation runs on via the tf.device function; the device can be a local CPU or GPU, or even a remote server.

tf.device identifies the device that should perform the computation by its device name.

In TensorFlow the CPU is named /cpu:0. By default, even if the machine has several CPUs, TensorFlow does not distinguish between them; they all share the name /cpu:0.
GPUs, by contrast, have distinct names on the same machine: the n-th GPU is named /gpu:n.
TensorFlow also provides a way to see which device each operation runs on: when creating a session, set the log_device_placement parameter and the placement of every operation will be printed.

import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0], shape=[3], name='a')
b = tf.constant([1.0, 2.0, 3.0], shape=[3], name='b')
c = a + b
# log_device_placement prints the device each operation runs on
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))

In the code above, log_device_placement=True is passed when the session is created, so the program prints to the screen the device on which each operation runs.

In a GPU-enabled environment, if an operation's device is not specified explicitly, TensorFlow prefers the GPU. However, even on a machine with four GPUs, TensorFlow will by default place operations only on /gpu:0. To put certain operations on a different GPU, or on the CPU, they must be assigned manually with tf.device.

import tensorflow as tf

a = tf.Variable(0, name='a')
with tf.device('/gpu:0'):
    b = tf.Variable(0, name='b')
# allow_soft_placement automatically moves ops that cannot run on the GPU back to the CPU
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True))
sess.run(tf.initialize_all_variables())

In the code above, variable a is created without a device specification, while b is explicitly pinned to /gpu:0. Not all operations in TensorFlow can be placed on the GPU; if an operation that the GPU cannot execute is forced onto it, the program raises an error.
On the GPU, tf.Variable supports only real-number types (float16, float32, double), not integers, so b (an integer variable) cannot actually be placed on /gpu:0. This is what the allow_soft_placement parameter, specified when the session is created, is for: when it is set to True, any operation that cannot be executed on the GPU is automatically moved to the CPU by TensorFlow.

Further improvements to follow ......

ref:https://blog.csdn.net/VioletHan7/article/details/82769531

Origin www.cnblogs.com/wind-chaser/p/11348564.html