TensorFlow Graphs (Custom Graphs)

In the earlier discussion of TensorFlow 2.0's computation process and the origin of the name, we repeatedly mentioned graphs, but we never saw any code that actually defines one. In fact, TensorFlow always maintains a default graph behind the scenes, and users can also explicitly define custom graphs to use in place of the default.
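To make this concrete, here is a minimal sketch using the tf.compat.v1 API, with eager execution disabled so that TF1-style graph mode applies throughout:

```python
import tensorflow as tf

# Disable eager execution so TF1-style graph mode applies throughout.
tf.compat.v1.disable_eager_execution()

# Ops created outside any "with graph.as_default()" block land in the
# global default graph that TensorFlow maintains for us.
default_g = tf.compat.v1.get_default_graph()
a = tf.compat.v1.constant(1.0, name="a")
print(a.graph is default_g)  # True

# A custom graph temporarily replaces the default inside its context manager.
my_g = tf.compat.v1.Graph()
with my_g.as_default():
    b = tf.compat.v1.constant(2.0, name="b")
print(b.graph is my_g)       # True
print(b.graph is default_g)  # False
```

Note that `b` belongs to the custom graph even when accessed after the `with` block ends; the context manager only controls where new nodes are created, not where existing nodes live.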

A TensorFlow graph contains a collection of tf.Operation objects. Each tf.Operation object represents a unit of computation; for example, addition, subtraction, multiplication, and division are all tf.Operation objects. A graph also contains tf.Tensor objects, which represent the data that participates in the computation; tf.Tensor objects flow between tf.Operation nodes along the edges of the graph.
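The relationship between a graph, its tf.Operation nodes, and its tf.Tensor edges can be inspected directly. A small sketch (the exact op type strings, such as Const or AddV2, may vary between TensorFlow versions):

```python
import tensorflow as tf

g = tf.compat.v1.Graph()
with g.as_default():
    x = tf.compat.v1.constant([1.0, 2.0], name="x")
    y = tf.compat.v1.constant([3.0, 4.0], name="y")
    z = tf.compat.v1.add(x, y, name="z")

# Every node in the graph is a tf.Operation; each operation produces
# tf.Tensor outputs that flow along the graph's edges.
for op in g.get_operations():
    print(op.name, op.type)  # exact type strings (Const, AddV2, ...) vary by TF version

print([t.name for t in z.op.inputs])  # the tensors feeding the add node
```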

Code:

import tensorflow as tf
# Define two-dimensional matrices as Python lists of ints
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[1, 1], 
     [1, 1],
     [1, 1]]
my_graph = tf.compat.v1.Graph()
with my_graph.as_default():
    # Pass the Python data A and B into the graph
    A_tf = tf.compat.v1.constant(A, dtype=tf.float32, name="A")
    B_tf = tf.compat.v1.constant(B, dtype=tf.float32, name="B")
    # Build the compute node in the graph
    C_tf = tf.compat.v1.matmul(A_tf, B_tf)
print("C_tf is my_graph:", C_tf.graph is my_graph)
# Graph construction is complete
with tf.compat.v1.Session(graph=my_graph) as sess:
    C = sess.run(C_tf)
    print(C)

Output:

C_tf is my_graph: True
[[ 6.  6.]
 [15. 15.]]

In the code above, tf.compat.v1.Graph() defines a custom graph; inside the with my_graph.as_default(): block, the data objects A_tf and B_tf and the compute node C_tf are added to that graph; the print statement verifies that the nodes really were defined in the custom graph; and passing graph=my_graph to tf.compat.v1.Session makes the custom graph the one the session executes. The output confirms that each data and compute node is stored in the graph we specified. One thing to note is that the output of a compute node is placed in the same graph as its input data objects, as the following example shows:

import tensorflow as tf
# Define two-dimensional matrices as Python lists of ints
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[1, 1],
     [1, 1],
     [1, 1]]
my_graph1 = tf.compat.v1.Graph()
my_graph2 = tf.compat.v1.Graph()
with my_graph1.as_default():
    # Pass the Python data A into the graph
    A_tf = tf.compat.v1.constant(A, dtype=tf.float32, name="A")
    # Pass the Python data B into the graph
    B_tf = tf.compat.v1.constant(B, dtype=tf.float32, name="B")
with my_graph2.as_default():
    # Attempt to place C_tf into the graph my_graph2
    C_tf = tf.compat.v1.matmul(A_tf, B_tf)

print("C_tf.graph is my_graph1:", C_tf.graph is my_graph1)
print("C_tf.graph is my_graph2:", C_tf.graph is my_graph2)

Output:

C_tf.graph is my_graph1: True
C_tf.graph is my_graph2: False

We can see that even though C_tf was created inside my_graph2's context, that does not change the fact that C_tf is actually stored in my_graph1. In other words, the matrix-multiplication node ends up in my_graph1, not in my_graph2, because that is where its inputs live. Next, let us define the data objects and the compute node in three different graphs and see what happens when data and compute nodes are cross-referenced between graphs.

Code:

import tensorflow as tf
# Define two-dimensional matrices as Python lists of ints
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[1, 1],
     [1, 1],
     [1, 1]]
my_graph1 = tf.compat.v1.Graph()
my_graph2 = tf.compat.v1.Graph()
my_graph3 = tf.compat.v1.Graph()
with my_graph1.as_default():
    # Pass the Python data A into the graph
    A_tf = tf.compat.v1.constant(A, dtype=tf.float32, name="A")
with my_graph2.as_default():
    # Pass the Python data B into the graph
    B_tf = tf.compat.v1.constant(B, dtype=tf.float32, name="B")
with my_graph3.as_default():
    # Build the compute node in the graph
    C_tf = tf.matmul(A_tf, B_tf)

# Graph construction is complete
with tf.compat.v1.Session(graph=my_graph3) as sess:
    C = sess.run(C_tf)
    print(C)

This time the code raises an error, with the following traceback:

Traceback (most recent call last):
  File "E:/Pycharm专业版/Workspace/Data_Science/gensim_operation/word2vec_test/tensorflow_test/preparation_work/图中数据与计算节点交叉引用.py", line 19, in <module>
    C_tf = tf.matmul(A_tf, B_tf)
  File "E:\Anaconda\Anaconda_Package\lib\site-packages\tensorflow_core\python\util\dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "E:\Anaconda\Anaconda_Package\lib\site-packages\tensorflow_core\python\ops\math_ops.py", line 2687, in matmul
    with ops.name_scope(name, "MatMul", [a, b]) as name:
  File "E:\Anaconda\Anaconda_Package\lib\site-packages\tensorflow_core\python\framework\ops.py", line 6337, in __enter__
    g_from_inputs = _get_graph_from_inputs(self._values)
  File "E:\Anaconda\Anaconda_Package\lib\site-packages\tensorflow_core\python\framework\ops.py", line 5982, in _get_graph_from_inputs
    _assert_same_graph(original_graph_element, graph_element)
  File "E:\Anaconda\Anaconda_Package\lib\site-packages\tensorflow_core\python\framework\ops.py", line 5917, in _assert_same_graph
    (item, original_item))
ValueError: Tensor("B:0", shape=(3, 2), dtype=float32) must be from the same graph as Tensor("A:0", shape=(2, 3), dtype=float32).

As the error above shows, cross-referencing data and compute nodes between different graphs fails. The ValueError message is explicit: when the matrix multiplication on line 19 was constructed, the data object (a Tensor) named "A:0" and the data object named "B:0" were found to live in different graphs. When building a graph, all data objects and compute nodes must belong to the current graph; resources cannot be cross-referenced between different graphs.
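If you really do need the value of a tensor from one graph inside another graph, one workable pattern is to evaluate the tensor in a session bound to its own graph first, then re-create it as a node in the target graph. A hedged sketch of that pattern:

```python
import tensorflow as tf

g1 = tf.compat.v1.Graph()
with g1.as_default():
    A_tf = tf.compat.v1.constant([[1., 2.], [3., 4.]], name="A")

# Evaluate A in a session bound to its own graph; this yields a plain
# NumPy array that is no longer tied to g1.
with tf.compat.v1.Session(graph=g1) as sess:
    A_value = sess.run(A_tf)

# Re-create the data as a node inside the target graph, then compute there.
g2 = tf.compat.v1.Graph()
with g2.as_default():
    A2_tf = tf.compat.v1.constant(A_value, name="A")
    C_tf = tf.compat.v1.matmul(A2_tf, A2_tf)

with tf.compat.v1.Session(graph=g2) as sess:
    C = sess.run(C_tf)
    print(C)
```

Because A_value is ordinary NumPy data rather than a graph Tensor, feeding it into g2 does not violate the same-graph constraint.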

Note: the tf.Graph() constructor is not thread-safe, so graphs must either be created from a single thread or be protected by external synchronization.
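One way to respect this constraint is to build the graph once in the main thread and only call Session.run, which is documented as thread-safe, from worker threads. A sketch of that layout, assuming TF1-style execution via the compat API:

```python
import threading
import tensorflow as tf

# Build the graph once, in a single thread, before any workers start.
g = tf.compat.v1.Graph()
with g.as_default():
    x = tf.compat.v1.constant([1.0, 2.0], name="x")
    y = tf.compat.v1.multiply(x, 2.0, name="y")

sess = tf.compat.v1.Session(graph=g)
results = [None] * 4

def worker(i):
    # Session.run is safe to call concurrently; graph *construction* is not.
    results[i] = sess.run(y)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
sess.close()
```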


Origin blog.csdn.net/qq_38890412/article/details/104058919