Campus Video AI Analysis Early Warning System with Distributed TensorFlow

The campus video AI analysis and early warning system is trained with a distributed TensorFlow model. The system monitors the behavior of students in real time and automatically sends out an alarm so that the relevant personnel can take timely measures. When deep learning is applied to practical problems, a major difficulty is the amount of computation required to train the model. To accelerate training, TensorFlow can use GPUs and/or distributed computing. TensorFlow can place each operation on a specific device through the tf.device function; the device can be a local CPU or GPU, or a remote device. When TensorFlow creates a session, it can print the device assigned to each operation by setting the log_device_placement parameter.

import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0], shape=[3], name='a')
b = tf.constant([1.0, 2.0, 3.0], shape=[3], name='b')
c = tf.add_n([a, b], name='c')

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))

########
Device mapping: no known devices.
c: (AddN): /job:localhost/replica:0/task:0/device:CPU:0
b: (Const): /job:localhost/replica:0/task:0/device:CPU:0
a: (Const): /job:localhost/replica:0/task:0/device:CPU:0

[2. 4. 6.]

In a TensorFlow environment with GPU support configured, if the running device is not explicitly specified, TensorFlow gives priority to the GPU.

import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0], shape=[3], name='a')
b = tf.constant([1.0, 2.0, 3.0], shape=[3], name='b')
c = tf.add_n([a, b], name='c')

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))

########
c: (AddN): /job:localhost/replica:0/task:0/device:GPU:0
b: (Const): /job:localhost/replica:0/task:0/device:GPU:0
a: (Const): /job:localhost/replica:0/task:0/device:GPU:0

[2. 4. 6.]

The device on which to run operations can be specified via tf.device.

import tensorflow as tf
with tf.device("/CPU:0"):
    a = tf.constant([1.0, 2.0, 3.0], shape=[3], name='a')
    b = tf.constant([1.0, 2.0, 3.0], shape=[3], name='b')
with tf.device("/GPU:0"):
    c = tf.add_n([a, b], name='c')

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))

Certain data types are not supported on the GPU, and forcibly placing such operations on the GPU will report an error. To avoid this problem, the allow_soft_placement parameter can be set when creating the session. When allow_soft_placement is True, if an operation cannot run on the GPU, TensorFlow will automatically run it on the CPU instead.

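The distributed training mentioned at the beginning builds on the same device-placement mechanism. A minimal sketch using TF 1.x's tf.train.ClusterSpec (the host:port addresses below are placeholders for illustration, not values from this article):

```python
import tensorflow as tf

# Define the cluster: one parameter server job and one worker job.
# The host:port addresses are placeholders; replace them with the
# machines actually used for training.
cluster = tf.train.ClusterSpec({
    'ps':     ['ps0.example.com:2222'],
    'worker': ['worker0.example.com:2222',
               'worker1.example.com:2222'],
})

# Each process would start a server for its own job/task, for example:
#   server = tf.train.Server(cluster, job_name='worker', task_index=0)
# Operations can then be pinned to remote devices with tf.device,
# e.g. tf.device('/job:ps/task:0'), just like local CPU/GPU devices.

print(cluster.jobs)  # the job names defined above
```

Variables are typically placed on the ps tasks while the compute-heavy training ops run on the worker tasks, which is how TensorFlow spreads model training across machines.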

Origin blog.csdn.net/KO_159/article/details/131348594