Basic concepts of the TensorFlow programming model

        TensorFlow performs its computations using a dataflow graph. We first build the dataflow graph (also called the network structure), which is a directed acyclic graph (DAG) made up of nodes and edges. The name TensorFlow itself is composed of Tensor and Flow: Tensor (tensor) stands for the data carried along the edges of the dataflow graph, while Flow stands for the operations that the nodes of the graph perform on that data.

1. Edge

      Edges in TensorFlow express two kinds of connection relationships: data dependencies and control dependencies.

        Data-dependency edges are drawn as solid lines and carry data, that is, tensors. Data of any dimensionality is collectively called a tensor. Tensors flow through the dataflow graph from front to back to complete one forward propagation pass, and the residuals flow from back to front to complete one back-propagation pass.
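
        As a rough illustration of the backward flow (a minimal sketch using the standard tf.gradients API; the example itself is not from the book), TensorFlow adds gradient operations to the graph that propagate values from back to front:

import tensorflow as tf

x = tf.constant(3.0)
y = x * x                   # forward pass along the graph edges: y = x^2
grad = tf.gradients(y, x)   # backward pass: gradient ops compute dy/dx = 2x

with tf.Session() as sess:
    print(sess.run(grad))   # [6.0]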

        There is also a special kind of edge, usually drawn as a dashed line, called a control dependency (control dependency). It can be used to control the order in which operations execute and to guarantee a happens-before relation: no data flows along such an edge, but the source node must finish executing before the destination node starts.
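
        A minimal sketch of a control-dependency edge, using the standard tf.control_dependencies context manager (the example itself is not from the book):

a = tf.constant(1.0)
b = tf.constant(2.0)
add_op = tf.add(a, b)

# mul_op does not use add_op's output, but the control edge
# guarantees that add_op finishes before mul_op starts.
with tf.control_dependencies([add_op]):
    mul_op = tf.multiply(a, b)

with tf.Session() as sess:
    print(sess.run(mul_op))   # 2.0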


 2. Node

        A node in the graph is also called an operator; it represents an operation (Operation). An operation usually stands for an applied mathematical computation, but it can also stand for a starting point for feeding in data (feed in), an endpoint for pushing out results (push out), or an endpoint for reading/writing a persistent variable (persistent variable). Operators must be fixed at the time the graph is constructed.
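
        As a rough illustration (assuming TF 1.x graph mode; this snippet is not from the book), every API call such as tf.constant or tf.multiply adds an operation node to the default graph, and those nodes can be inspected:

import tensorflow as tf

matrix = tf.constant([[1., 2.]])    # adds a "Const" operation node
doubled = tf.multiply(matrix, 2.0)  # adds a "Mul" operation node

# List the operation nodes registered in the default graph.
for op in tf.get_default_graph().get_operations():
    print(op.name, op.type)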
 

 3. Graph

        A graph is built by creating operations; the following example constructs a simple graph consisting of two constant operations and a matrix multiplication:

import tensorflow as tf

# Create a constant operation that produces a 1×2 matrix
matrix1 = tf.constant([[3., 3.]])

# Create another constant operation that produces a 2×1 matrix
matrix2 = tf.constant([[2.],[2.]])

# Create a matrix multiplication operation, taking matrix1 and matrix2 as inputs; the return value represents the result of the multiplication
product = tf.matmul(matrix1, matrix2)

 4. Session

        The first step in launching a graph is to create a Session object. A session (session) provides methods for executing operations in the graph. The general pattern is: establish a session, which starts with an empty graph; add nodes and edges to form a graph; then execute it.
To create a graph and run its operations, use tf.Session in the Python API. An example follows:

with tf.Session() as sess:
    result = sess.run([product])
    print("result:", result)

        Output:

result: [array([[12.]], dtype=float32)]

        When the Session object's run() method is called to execute the graph, tensors can be passed in; this process is called feeding (feed). The type of the returned result depends on the inputs; the process of getting the outputs back is called fetching (fetch).

        The session is the bridge for interacting with graphs. A session can hold multiple graphs; the session can modify the structure of a graph and can inject data into a graph for computation. Accordingly, the session has two main API interfaces: Extend and Run. The Extend operation adds nodes and edges to the Graph; the Run operation, given the output nodes to fetch and the necessary feed data, performs the computation and returns the results.
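
        For example, several tensors can be fetched in a single Run call (a minimal sketch reusing matrix1 and product from the graph built above; not code from the book):

with tf.Session() as sess:
    # Fetch several tensors at once; TensorFlow executes the
    # subgraph needed to produce all of them in one Run call.
    mat_val, prod_val = sess.run([matrix1, product])
    print("matrix1:", mat_val)
    print("product:", prod_val)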

 5. Device

        A device (device) is a piece of hardware that can be used for computation and has its own address space, such as a GPU or a CPU. In order to execute operations in a distributed manner and make full use of computing resources, TensorFlow lets you explicitly specify the device on which an operation runs. Details are as follows:

with tf.Session() as sess:
    # Specify that the operations run on the second GPU
    with tf.device("/gpu:1"):
        matrix1 = tf.constant([[3., 3.]])
        matrix2 = tf.constant([[2.],[2.]])
        product = tf.matmul(matrix1, matrix2)
        print("product:", product)

        It will raise an error:

...
...
...
  File "D:\software\anaconda\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): Cannot assign a device for operation MatMul_1: node MatMul_1 (defined at <ipython-input-3-e59e71867305>:6) was explicitly assigned to /device:GPU:1 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
	 [[node MatMul_1 (defined at <ipython-input-3-e59e71867305>:6) ]]

        This is because my computer does not have a GPU. Changing the device specification to "/cpu:0" fixes it:

with tf.Session() as sess:
    with tf.device("/cpu:0"):
        matrix1 = tf.constant([[3., 3.]])
        matrix2 = tf.constant([[2.],[2.]])
        product = tf.matmul(matrix1, matrix2)
        print("product:", sess.run(product))

        The results are as follows:

product: [[12.]]

6. Variables

        A variable (variable) is a special kind of data: it has a fixed position in the graph and does not flow through it the way an ordinary tensor does. A variable tensor is created with the tf.Variable() constructor, which requires an initial value; the shape and type of that initial value determine the shape and type of the variable:

# Create a variable, initialized to the scalar 0
state = tf.Variable(0, name="counter")

# Create a constant tensor:
input1 = tf.constant(3.0)
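
        A variable must also be explicitly initialized before it can be read in a session. A minimal sketch of the usual counter pattern (the update ops below are illustrative and not taken from the book):

# Ops that increment the counter variable defined above.
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)

with tf.Session() as sess:
    # Variables must be initialized before they can be read or updated.
    sess.run(tf.global_variables_initializer())
    print(sess.run(state))       # 0
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))   # 1, 2, 3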

        TensorFlow also provides a feed mechanism: tf.placeholder() can be used while constructing the graph as a temporary stand-in for the tensor of any operation. When the Session object's run() method is called, the feed data is supplied as an argument to that call, and the fed data disappears once the call ends. A code example follows:

import tensorflow as tf
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)
with tf.Session() as sess:
    print("output:", sess.run([output], feed_dict={input1:[7.], input2:[2.]}))

        The results are as follows:

output: [array([14.], dtype=float32)]

7. Kernel

        We know that an operation (operation) is the abstract name for a computation such as matmul or add, while a kernel (Kernel) is an implementation of an operation that can run on a specific type of device (e.g., CPU or GPU). Accordingly, the same operation may correspond to multiple kernels. When defining a custom operation, the new kernel needs to be registered with the system.
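
        One way to observe which device's kernel an operation is actually placed on is to enable device placement logging (a small sketch using the standard log_device_placement option, reusing product from above; not code from the book):

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    # The session log reports the device chosen for each operation,
    # and hence which kernel implementation will run it,
    # e.g. "MatMul: ... /device:CPU:0".
    print(sess.run(product))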

PS: All of the above is taken from Section 4.3 of Chapter 4 of "TensorFlow Technical Analysis and Practice" by Li Jiaxuan.

Origin blog.csdn.net/a857553315/article/details/93376701