TensorFlow op: tf.global_variables_initializer

1. Installation
TensorFlow and Deeplearning4j are two deep learning frameworks in current use. TensorFlow previously supported Python 3.5 and has since been updated to support 3.6, so the latest version is used here.
The road is long: the installation environment is as follows
WIN10
Anaconda 3.5
Python 3.6
TensorFlow 1.4


2. Understanding TensorFlow's basic concepts and principles
1. How TensorFlow works

TensorFlow is a numerical computation technique based on data flow graphs. A data flow graph describes the process by which values are computed.

In this directed graph, nodes usually represent mathematical operations, and edges represent the links between nodes; an edge is responsible for transmitting multi-dimensional data (tensors).

Nodes can be assigned to multiple computing devices and can perform operations in parallel and asynchronously. Because the graph is directed, a node can execute its operation only after all the nodes it depends on have finished their computations.
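This dependency-driven execution can be illustrated with a tiny pure-Python sketch (an illustration of the idea only, not the TensorFlow API; all names here are made up for the example): each node holds an operation plus the names of its input nodes, and a node is evaluated only after all of its inputs have been evaluated.

```python
# Hypothetical mini "dataflow graph": maps node name -> (operation, input node names).
def run(graph, node, cache=None):
    """Evaluate `node`, first evaluating all nodes it depends on."""
    cache = {} if cache is None else cache
    if node in cache:
        return cache[node]
    op, inputs = graph[node]
    vals = [run(graph, i, cache) for i in inputs]  # inputs must finish first
    cache[node] = op(*vals)
    return cache[node]

graph = {
    "a": (lambda: 2.0, []),                 # constant node
    "b": (lambda: 3.0, []),                 # constant node
    "c": (lambda x, y: x * y, ["a", "b"]),  # depends on a and b
}
print(run(graph, "c"))  # 6.0
```

TensorFlow's session does essentially this (with real scheduling across devices); independent subgraphs can run concurrently because only the declared edges impose an order.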

2. Basic usage of TensorFlow

Next, following the concrete code in the official documentation, let's look at the basic usage. You need to understand how TensorFlow works, in five steps:
1. represent the computation process as a graph;
2. execute the computation through Sessions;
3. represent data as tensors;
4. use Variables to maintain state information;
5. use feeds and fetches to supply data to, and grab results from, arbitrary operations.

To use TensorFlow, you must understand that TensorFlow:

uses graphs (Graph) to represent computation tasks
executes graphs in the context (context) of sessions (Session)
uses tensors to represent data
maintains state with variables (Variable)
uses feed and fetch to assign values to, or fetch data from, arbitrary operations (arbitrary operation)
Example 1: generate three-dimensional data, then fit it with a plane:
The following operations follow the example on the official website.

import tensorflow as tf
import numpy as np

# Generate 100 random data points with NumPy
x_data = np.float32(np.random.rand(2, 100))
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# Construct a linear model
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b

# Minimize the mean squared error
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# Initialize the variables
init = tf.global_variables_initializer()
# The official tf.initialize_all_variables() was deprecated after March 2017;
# use tf.global_variables_initializer() instead.

# Launch the graph (Graph)
sess = tf.Session()
sess.run(init)

# Fit the plane
for step in range(0, 201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))
# output is:
0 [[0.75113136 -0.14751725]] [0.2857058]
20 [[0.06342752 0.32736415]] [0.24482927]
40 [[0.10146417 0.23744738]] [0.27712563]
60 [[0.10354312 0.21220125]] [0.290878]
80 [[0.10193551 0.20427427]] [0.2964265]
100 [[0.10085492 0.201565]] [0.298612]
120 [[0.10035028 0.20058727]] [0.29946309]
140 [[0.10013894 0.20022322]] [0.29979277]
160 [[0.1000543 0.20008542]] [0.29992008]
180 [[0.10002106 0.20003279]] [0.29996923]
200 [[0.10000814 0.20001261]] [0.29998815]
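As a cross-check of the fitted values above, the same plane can be recovered in closed form with ordinary least squares in pure NumPy (an independent sketch, not part of the original TensorFlow code; the seed is chosen arbitrarily for reproducibility):

```python
import numpy as np

np.random.seed(0)
x_data = np.float32(np.random.rand(2, 100))
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# Append a row of ones so the bias 0.300 is learned as a third weight.
X = np.vstack([x_data, np.ones(100, dtype=np.float32)])  # shape (3, 100)
w, residuals, rank, sv = np.linalg.lstsq(X.T, y_data, rcond=None)

print(np.round(w, 3))  # close to [0.1, 0.2, 0.3]
```

Because y_data is exactly linear in x_data, the least-squares solution matches the true weights [0.1, 0.2] and bias 0.3, which is what the gradient-descent loop converges to.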
Note the following lines, which correspond to the five main steps described above:

W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
sess.run(train)
print(step, sess.run(W), sess.run(b))
Next, let's look at the specific concepts:
TensorFlow represents computation tasks as graphs; a node in a graph is called an operation, abbreviated op.
An op takes zero or more tensors (Tensor), performs its computation, and produces zero or more tensors.
A graph must be launched in a session (Session). The session distributes the graph's ops to devices such as CPUs or GPUs and provides methods to execute them; these methods return the resulting tensors (Tensor).

Building the graph
Example 2: matrix multiplication:
import tensorflow as tf
# Create a constant op whose return value 'matrix1' represents a 1x2 matrix.
matrix1 = tf.constant([[3., 3.]])

# Create another constant op whose return value 'matrix2' represents a 2x1 matrix.
matrix2 = tf.constant([[2.], [2.]])

# Create a matrix multiplication matmul op, taking 'matrix1' and 'matrix2' as inputs.
# The return value 'product' represents the result of the matrix multiplication.
product = tf.matmul(matrix1, matrix2)
The default graph now has three nodes: two constant() ops and one matmul() op. To actually perform the matrix multiplication and get its result, you must launch the graph in a session.

Tensor
A tensor is a multilinear map (multilinear map) from vector spaces to the real field (v is a vector space, v* is its dual space).
You can think of a TensorFlow tensor as an n-dimensional list or array. A tensor has a rank, a static type, and a shape.
Rank

In the TensorFlow system, a tensor's number of dimensions is described by its rank (order). Note that the rank of a tensor is not the same concept as the rank of a matrix: the rank of a tensor is simply its number of dimensions. For example, the following tensor (defined as a Python list) has rank 2:

t = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
You can think of a rank-2 tensor as what we usually call a matrix, and a rank-1 tensor as a vector. For a rank-2 tensor, you can access any element with t[i, j]; for a rank-3 tensor, with t[i, j, k]:
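Since this article later compares tf with NumPy, the rank-and-indexing ideas can be checked directly with NumPy arrays (a small illustrative sketch):

```python
import numpy as np

t = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(t.ndim)     # 2: a rank-2 tensor is a matrix
print(t[1, 2])    # 6: element access with t[i, j]

v = np.array([1, 2, 3])
print(v.ndim)     # 1: a rank-1 tensor is a vector

t3 = np.zeros((2, 3, 4))
print(t3[0, 1, 2])  # 0.0: rank-3 access with t[i, j, k]
```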


Shape
The TensorFlow documentation uses three notations to conveniently describe a tensor's dimensionality: rank, shape, and dimension number. The following shows the relationship between them:

Data type
In addition to dimensionality, a tensor has a data type property. You can assign any of the following data types to a tensor:
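The original table of types does not survive here; TensorFlow's dtypes such as tf.float32, tf.int32, and tf.float64 mirror the NumPy dtypes of the same names, so the idea can be sketched with NumPy (an illustration, not the lost table):

```python
import numpy as np

# The data type is fixed when the array (tensor) is created...
a = np.zeros((2, 2), dtype=np.float32)
b = np.array([1, 2], dtype=np.int32)
print(a.dtype)  # float32
print(b.dtype)  # int32

# ...and conversion between types is an explicit operation.
c = a.astype(np.float64)
print(c.dtype)  # float64
```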


Launching the graph in a session
To launch a graph, create a Session object; if it is created without any arguments, the session constructor launches the default graph.
The session is responsible for passing in all the inputs an op requires; ops are usually executed concurrently.
# Launch the default graph.
sess = tf.Session()

# Call the 'run()' method of sess, passing in 'product' as an argument.
# This triggers the three ops in the graph (two constant ops and one matmul op)
# and indicates that we want to fetch back the output of the matmul op.
result = sess.run(product)

# The return value 'result' is a numpy `ndarray` object.
print(result)
# ==> [[12.]]

# When the task is done, close the session to free resources.
sess.close()

Interactive use
In the Python API, a Session object is used to launch a graph, and the Session.run() method is called to execute operations.

For ease of use in interactive environments such as IPython, TensorFlow provides InteractiveSession in place of the Session class, along with the Tensor.eval() and Operation.run() methods in place of Session.run().
Example 3: subtract 'a' from 'x':

# Enter an interactive TensorFlow session.
import tensorflow as tf
sess = tf.InteractiveSession()

x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])

# Initialize 'x' using the run() method of its initializer op
x.initializer.run()

# Add a subtraction op that subtracts 'a' from 'x', run it, and print the output
# (tf.sub was renamed tf.subtract in TensorFlow 1.0)
sub = tf.subtract(x, a)
print(sub.eval())
# ==> [-2. -1.]

Variable

The tensors used above are all constants (constant).
A variable (Variable) maintains state information while the graph executes. It is needed to hold and dynamically update parameter values.

The code below calls tf.global_variables_initializer to initialize the variables in advance.
TensorFlow variables must be initialized before they have values! Constant tensors need no such step.

The assign() and add() operations below do not actually perform the assignment and addition until run() is called.

Example 4: implement a simple counter with a variable:

# Create a variable initialized to the scalar value 0.
state = tf.Variable(0, name="counter")

# Create an op whose role is to increase state by 1
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)

# After the graph is launched, variables must be initialized by running an `init` op;
# TensorFlow's global_variables_initializer actually assigns the variables their initial values
init_op = tf.global_variables_initializer()

# Launch the default graph and run the ops
with tf.Session() as sess:

    # Run the 'init' op
    sess.run(init_op)

    # Print the initial value of 'state'.
    # To fetch an operation's output, call run() on the Session object,
    # passing in some tensors; those tensors fetch the results for you.
    # Here we fetch back only the single node state,
    # but you can also fetch multiple tensors in one run() call:
    # result = sess.run([mul, intermed])
    print(sess.run(state))

    # Run the op that updates 'state', and print 'state'
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))

# Output:

# 0
# 1
# 2
# 3
The code above defines a computation graph like the following:

OK, to sum up, here is one clear piece of code.
The process is: build the graph -> launch the graph -> fetch the values

Compute a matrix multiplication:

import tensorflow as tf

# Build the graph
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])

product = tf.matmul(matrix1, matrix2)

# Launch the graph
sess = tf.Session()

# Fetch the value
result = sess.run(product)
print(result)

sess.close()
The code above covers the basic usage. Looking at it, don't you feel that tf is a bit like numpy?

Let's contrast TensorFlow with ordinary NumPy and look at the differences:

eval()

In NumPy, after a is defined, printing it directly shows its value.

In [37]: a = np.zeros((2,2))

In [39]: print(a)
[[ 0.  0.]
 [ 0.  0.]]
But in TensorFlow the output must be evaluated explicitly (that is, via the eval() function)!

In [38]: ta = tf.zeros((2,2))

In [40]: print(ta)
Tensor("zeros_1:0", shape=(2, 2), dtype=float32)

In [41]: print(ta.eval())
[[ 0.  0.]
 [ 0.  0.]]
That is all for now.
References
Reference: Geek Academy (Jikexueyuan)
---------------------
Author: "the little primary-school student of the IT world" (CSDN handle)
Source: CSDN
Original: https://blog.csdn.net/HHTNAN/article/details/78961958
Copyright: this is the blogger's original article; please include a link to the post when reposting.

Origin: www.cnblogs.com/jfdwd/p/11183600.html