TensorFlow core concepts and basic use

TensorFlow core concepts

  Overview: Computation in TensorFlow is represented as a directed graph, called a computational graph, in which each operation is a node and the links between nodes are edges. The graph describes how data flows through the computation and is also responsible for maintaining and updating state. Users can apply conditional control and looping to branches of the graph. Each node can have any number of inputs and outputs; a node describes an operation and can be regarded as an instantiation of that operation. The data flowing along the edges of the computational graph are called tensors.

  1. Computational graph

  TensorFlow is a programming system that expresses computation in the form of computational graphs. Each computation is a node of the computation graph, and the edges between nodes describe the dependencies between computations.
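The idea that every operation becomes a node of a graph can be seen directly by inspecting the default graph. The sketch below uses the TF 1.x-style API (via `tf.compat.v1`, so it also runs under TensorFlow 2.x); the names `a`, `b`, `c` are chosen here for illustration.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    a = tf.constant(1.0, name="a")   # node 1: a constant op
    b = tf.constant(2.0, name="b")   # node 2: another constant op
    c = tf.add(a, b, name="c")       # node 3: its edges come from nodes 1 and 2

# Each op is a node of the graph; the tensors between them are the edges.
print([op.name for op in g.get_operations()])  # ['a', 'b', 'c']
```

Note that no arithmetic has happened yet: building the graph only records the operations and their dependencies; the actual computation is deferred to a session.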

  2. TensorFlow's data model - tensor

  TensorFlow uses the tensor data structure to represent all data. You can think of a tensor as an n-dimensional array or list. A tensor has a static type and dynamic dimensions, and tensors are what flow between the nodes of the graph along its edges.
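A short sketch of the static type and the dimensions of a tensor, again using the TF 1.x-style API via `tf.compat.v1`:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A 2x3 tensor; the dtype (float32) is inferred from the Python floats.
t = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(t.dtype)   # the static type of the tensor
print(t.shape)   # its dimensions: (2, 3)

# A placeholder may leave a dimension unknown until the graph is run,
# e.g. a batch dimension that is determined dynamically.
p = tf.placeholder(tf.float32, shape=[None, 3])
print(p.shape)
```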

  3. TensorFlow's running model - session

  Sessions are used to perform defined operations. A session owns and manages all the resources of the TensorFlow program runtime. When all computations are complete, the session needs to be closed to help the system reclaim resources. There are generally two ways to use sessions:

   (1) sess = tf.Session()

        sess.run(...)

        sess.close()

        # the call to close() may never execute if the program exits abnormally

   (2) with tf.Session() as sess:

            sess.run(...)

        # the session is closed automatically when the block exits

  4. Variables

  Variables are a special kind of operation that stores tensors which need to persist in memory or GPU memory, such as the coefficients of a neural network. When training a model, variables are used to store and update the parameters; it is common to represent the parameters of a statistical model as a set of variables. For example, you can store the weights of a neural network as a tensor in a variable. During training, this tensor is updated by repeatedly running the training graph.
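A minimal sketch of this update cycle, with `tf.assign_add` standing in for a training step (TF 1.x-style API via `tf.compat.v1`; the names `weights` and `update` are illustrative):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A weight tensor stored in a variable, updated by repeatedly running an op.
weights = tf.Variable(tf.zeros([2]), name="weights")
update = tf.assign_add(weights, [1.0, 1.0])  # stand-in for one "training" step

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # variables must be initialized
    for _ in range(3):
        sess.run(update)                         # run the update op three times
    result = sess.run(weights)

print(result)  # [3. 3.]
```

Unlike ordinary tensors, the variable's value survives between `sess.run` calls, which is exactly what makes it suitable for model parameters.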

    

Basic use of TensorFlow

  1. Build the graph

    import tensorflow as tf

    # Create a constant op that produces a 1x2 matrix. This op is added
    # as a node to the default graph.
    # The constructor's return value represents the output of the constant op.
    matrix1 = tf.constant([[3., 3.]])

    # Create another constant op that produces a 2x1 matrix.
    matrix2 = tf.constant([[2.],[2.]])

    # Create a matmul op that takes 'matrix1' and 'matrix2' as inputs.
    # The return value 'product' represents the result of the matrix multiplication.
    product = tf.matmul(matrix1, matrix2)

  2. Launch the graph in a session

   This uses either of the two ways of creating a session described in section 3 above.
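Putting the two steps together, a complete run of the `product` graph might look like this (TF 1.x-style API via `tf.compat.v1`, using the `with` form so the session closes automatically):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Build the graph: two constant ops and a matmul op.
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.], [2.]])
product = tf.matmul(matrix1, matrix2)

# Launch the graph; sess.run(product) triggers the actual computation
# and returns the result as a NumPy array.
with tf.Session() as sess:
    result = sess.run(product)

print(result)  # [[12.]]  (3*2 + 3*2)
```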
