TensorFlow study notes (4) - Getting Started - Basic usage

Tutorial source: the TensorFlow Chinese community

Basic use

To use TensorFlow, you must understand how TensorFlow:

  • uses graphs (Graph) to represent computation tasks;
  • executes graphs in the context of sessions (Session);
  • uses tensors to represent data;
  • maintains state through variables (Variable);
  • uses feed and fetch operations to assign values to, or retrieve data from, arbitrary operations.

Overview

TensorFlow is a programming system that uses graphs to represent computation tasks. The nodes in a graph are called ops (short for operations). An op takes zero or more Tensors, performs some computation, and produces zero or more Tensors. Each Tensor is a typed multi-dimensional array. For example, a group of images can be represented as a four-dimensional array of floating-point numbers whose four dimensions are [batch, height, width, channels].

A TensorFlow graph describes a computation, but to actually compute anything the graph must be launched in a session. A session places the graph's ops onto devices such as CPUs or GPUs, and provides methods to execute them. These methods execute the ops and return the resulting tensors: in Python, as numpy ndarray objects; in C and C++, as tensorflow::Tensor instances.

The computation graph

TensorFlow programs are typically organized into a construction phase and an execution phase.

  • In the construction phase, ops are assembled into a graph.
  • In the execution phase, a session is used to execute the ops in the graph.

For example, it is common to create a graph in the construction phase to represent and train a neural network, and then repeatedly execute the training ops in that graph during the execution phase.

TensorFlow supports the C, C++, and Python programming languages. Currently, the TensorFlow Python library is the easiest to use: it provides many helper functions that simplify the work of constructing graphs, which the C and C++ libraries do not yet support.

The session libraries of the three languages are equivalent.

Building the graph

The first step in building a graph is to create source ops. Source ops take no input; a constant (Constant) is one example. The output of a source op is passed to other ops as the input for their computations.

In the Python library, op constructors return values that represent the outputs of the constructed ops, and these return values can be passed to other op constructors as inputs.

The TensorFlow Python library has a default graph, to which op constructors add nodes. The default graph is sufficient for many programs. Read the Graph class documentation to learn how to manage multiple graphs.

import tensorflow as tf

# Create a constant op that produces a 1x2 matrix. This op is added
# as a node to the default graph.
#
# The constructor's return value represents the output of the constant op.
matrix1 = tf.constant([[3., 3.]])

# Create another constant op that produces a 2x1 matrix.
matrix2 = tf.constant([[2.], [2.]])

# Create a matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The return value 'product' represents the result of the matrix multiplication.
product = tf.matmul(matrix1, matrix2)

The default graph now has three nodes: two constant() ops and one matmul() op. To actually multiply the matrices and get the result, you must launch the graph in a session.
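As a quick sanity check on the arithmetic, the product that the session will compute can be reproduced in plain NumPy (used here purely for illustration; NumPy is not part of the graph itself):

```python
import numpy as np

# The same 1x2 by 2x1 product that the graph describes:
# [[3., 3.]] x [[2.], [2.]] = [[3*2 + 3*2]] = [[12.]]
m1 = np.array([[3., 3.]])
m2 = np.array([[2.], [2.]])
print(np.matmul(m1, m2))  # ==> [[12.]]
```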

Launching the graph in a session

Once the construction phase is complete, the graph can be launched. The first step in launching is to create a Session object; if it is created without any arguments, the session constructor launches the default graph.

For the complete session API, please read the Session class documentation.

# Launch the default graph.
sess = tf.Session()

# Call the 'run()' method of sess to execute the matmul op, passing 'product'
# as the method's argument. As mentioned above, 'product' represents the output
# of the matmul op, and passing it in tells the method that we want to fetch
# that output back.
#
# The whole execution process is automatic: the session takes care of passing
# all the inputs each op needs. Ops are usually executed in parallel.
#
# The call 'run(product)' triggers the execution of the three ops in the graph
# (the two constant ops and the matmul op).
#
# The return value 'result' is a numpy `ndarray` object.
result = sess.run(product)
print(result)
# ==> [[ 12.]]

# Task done; close the session.
sess.close()

A Session object needs to be closed after use to release its resources. Besides calling close() explicitly, you can use a "with" block so the session is closed automatically when the block ends.

with tf.Session() as sess:
  result = sess.run([product])
  print(result)

In its implementation, TensorFlow translates the graph definition into distributed executable operations in order to take full advantage of the available compute resources (such as CPUs and GPUs). You usually do not need to specify a CPU or GPU explicitly; TensorFlow detects them automatically. If a GPU is detected, TensorFlow uses the first GPU it finds to perform as many operations as possible.

If more than one GPU is available on the machine, the GPUs other than the first are not used by default. To let TensorFlow use them, you must assign ops to them explicitly. The with...device statement is used to assign ops to a specific CPU or GPU:

with tf.Session() as sess:
  with tf.device("/gpu:1"):
    matrix1 = tf.constant([[3., 3.]])
    matrix2 = tf.constant([[2.],[2.]])
    product = tf.matmul(matrix1, matrix2)
    ...

Devices are identified by strings. Currently supported devices include:

  • "/cpu:0": the machine's CPU.
  • "/gpu:0": the machine's first GPU, if any.
  • "/gpu:1": the machine's second GPU, and so on.

Read the Using GPUs section for more information on using GPUs with TensorFlow.

Interactive use

The Python examples in this document launch the graph with a Session and call the Session.run() method to execute ops.

For ease of use in interactive Python environments such as IPython, you can use InteractiveSession instead of the Session class, and the Tensor.eval() and Operation.run() methods in place of Session.run(). This avoids having to keep a variable holding the session.

# Enter an interactive TensorFlow session.
import tensorflow as tf
sess = tf.InteractiveSession()

x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])

# Initialize 'x' using the run() method of its initializer op.
x.initializer.run()

# Add a subtract op that subtracts 'a' from 'x'. Run the subtract op
# and print the result.
sub = tf.subtract(x, a)
print(sub.eval())
# ==> [-2. -1.]

Tensor

TensorFlow programs use the tensor data structure to represent all data; only tensors are passed between operations in the computation graph. You can think of a TensorFlow tensor as an n-dimensional array or list. A tensor has a static type, a rank, and a shape. To learn how TensorFlow handles these concepts, see Rank, Shape, and Type.
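Since the session returns numpy ndarray objects, a tensor's rank, shape, and static type map directly onto NumPy notions. A small NumPy sketch (illustrative only, not TensorFlow API):

```python
import numpy as np

# A rank-2 tensor (a matrix) with shape (2, 3) and static type float32.
t = np.array([[1., 2., 3.],
              [4., 5., 6.]], dtype=np.float32)

print(t.ndim)   # rank:  2
print(t.shape)  # shape: (2, 3)
print(t.dtype)  # type:  float32
```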

Variables

Variables maintain state across executions of the graph. The following example demonstrates how to use variables to implement a simple counter. See the Variables section for more details.

# Create a variable, initialized to the scalar value 0.
state = tf.Variable(0, name="counter")

# Create an op whose effect is to increment 'state' by 1.
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)

# After the graph is launched, variables must be initialized by running an
# `init` op, so an `init` op must first be added to the graph.
init_op = tf.global_variables_initializer()

# Launch the graph and run the ops.
with tf.Session() as sess:
  # Run the 'init' op.
  sess.run(init_op)
  # Print the initial value of 'state'.
  print(sess.run(state))
  # Run the op that updates 'state', then print 'state'.
  for _ in range(3):
    sess.run(update)
    print(sess.run(state))

# Output:
# 0
# 1
# 2
# 3

The assign() operation in this code is part of the expression the graph describes, just like the add() operation, so it does not actually perform the assignment until run() executes the expression.

The parameters of a statistical model are usually represented as a set of variables. For example, you can store the weights of a neural network as a variable holding a tensor; during training, this tensor is updated by repeatedly running a training graph.
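The idea that training repeatedly updates a weight tensor held in a variable can be sketched without TensorFlow. The loop below is a hand-written gradient-descent update for the one-parameter model y = w * x; the names (w, lr) and the data point are illustrative, not from the tutorial:

```python
# Minimize the squared error of y = w * x on one data point (x=2.0, y=6.0).
# The true weight is 3.0; each loop iteration plays the role of one run of
# a training op that updates the stored weight.
x, y = 2.0, 6.0
w = 0.0          # the "variable": state kept across updates
lr = 0.1         # learning rate

for _ in range(50):
    grad = 2.0 * (w * x - y) * x   # d/dw of (w*x - y)^2
    w -= lr * grad                 # the update, like tf.assign on a Variable

print(round(w, 3))  # converges toward 3.0
```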

Fetch

To fetch the outputs of operations, pass in the tensors you want to retrieve when calling run() on the Session object, and it will fetch their values for you. In the previous example we fetched only the single node state, but you can fetch multiple tensors as well:

input1 = tf.constant(3.0)
input2 = tf.constant(2.0)
input3 = tf.constant(5.0)
intermed = tf.add(input2, input3)
mul = tf.multiply(input1, intermed)

with tf.Session() as sess:
  result = sess.run([mul, intermed])
  print(result)

# 输出:
# [array([ 21.], dtype=float32), array([ 7.], dtype=float32)]

All the tensor values you need are fetched together in a single run of the op, rather than being fetched one by one.
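The fetch above returns both values from one evaluation of the shared subgraph: intermed is computed once and reused by mul. The equivalent arithmetic in plain Python (illustrative only) is:

```python
# The graph computes intermed = input2 + input3 and mul = input1 * intermed.
# Fetching [mul, intermed] in a single run evaluates 'intermed' only once.
input1, input2, input3 = 3.0, 2.0, 5.0
intermed = input2 + input3   # 7.0
mul = input1 * intermed      # 21.0
print([mul, intermed])       # ==> [21.0, 7.0]
```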

Feed

The examples above introduce tensors into the computation graph stored as constants or variables. TensorFlow also provides a feed mechanism that can temporarily replace any tensor in the graph; that is, you can submit a patch for the graph that substitutes a tensor directly into any operation.

A feed temporarily replaces the output of an operation with a tensor value. You supply the feed data as an argument to a run() call. A feed is only valid within the call that uses it; when the method ends, the feed disappears. The most common use case is to designate certain operations as "feed" operations, which is done by creating placeholders for them with tf.placeholder().


input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)

with tf.Session() as sess:
  print(sess.run([output], feed_dict={input1: [7.], input2: [2.]}))

# 输出:
# [array([ 14.], dtype=float32)]

If a placeholder() is not fed correctly, it produces an error. The MNIST fully-connected feed tutorial (source code) gives a larger-scale example of using feeds.

Origin: blog.csdn.net/guoyunfei123/article/details/82762331