1 TensorBoard
First, let's run the simplest possible example to see how to use TensorBoard. The code is as follows:
# -*-coding:utf8 -*-
import tensorflow as tf
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

def add():
    a = tf.constant(2, name='a')
    b = tf.constant(3, name='b')
    x = tf.add(a, b, name='add')
    writer = tf.summary.FileWriter('./graphs', tf.get_default_graph())
    with tf.Session() as sess:
        print(sess.run(x))
    writer.close()

if __name__ == '__main__':
    add()
Next, let's see how to launch TensorBoard. First run the program, which stores the graph structure in the graphs directory, then start TensorBoard:
tensorboard --logdir="./graphs" --port 6006
Then open http://localhost:6006/ in a local browser. The GRAPHS tab shows the graph we just defined, with nodes a, b, and add.
TensorBoard can do much more than visualize the graph structure. During training, it can plot the loss, learning rate, and other quantities over time, which makes it much easier to analyze problems.
2 Constants
2.1 Custom constants:
tf.constant(
    value,
    dtype=None,
    shape=None,
    name='Const',
    verify_shape=False
)
Let's look at a simple program:
# -*-coding:utf8 -*-
import tensorflow as tf

def constant():
    a = tf.constant([2, 2], name='a')
    b = tf.constant([[0, 1], [2, 3]], name='b')
    x = tf.multiply(a, b, name='mul')
    with tf.Session() as sess:
        print(sess.run(x))

if __name__ == '__main__':
    constant()
The results of the operation are as follows:
[[0 2]
[4 6]]
2.2 Special value constants
We can also call some special functions, mainly the following types:
- tf.zeros(shape, dtype=tf.float32, name=None)
eg: tf.zeros([2,3], tf.int32) ==> [[0, 0, 0], [0, 0, 0]]
- tf.zeros_like(input_tensor, dtype=None, name=None, optimize=True)
eg: input_tensor is [[0,1],[2,3],[4,5]]
tf.zeros_like(input_tensor) ==> [[0,0],[0,0],[0,0]]
- tf.ones(shape, dtype=tf.float32, name=None)
- tf.ones_like(input_tensor, dtype=None, name=None, optimize=True)
- tf.fill(dims, value, name=None)
tf.fill([2,3], 8) ==> [[8,8,8],[8,8,8]]
- tf.lin_space(start, stop, num, name=None)
tf.lin_space(10.0, 13.0, 4) ==> [10. 11. 12. 13.]
- tf.range(start, limit=None, delta=1, dtype=None, name='range')
tf.range(3, 18, 3) ==> [3 6 9 12 15]
tf.range(5) ==> [0 1 2 3 4]
2.3 Random constant
- tf.random_normal
- tf.truncated_normal
- tf.random_uniform
- tf.random_shuffle
- tf.random_crop
- tf.multinomial
- tf.random_gamma
3 Operations
TensorFlow defines many Ops, which can be grouped into the following functional categories:
Category | Commonly used functions
---|---
Element-wise numerical calculation | Add, Sub, Mul, Div, Exp, Log, Greater, Less, Equal, …
Array calculation | Concat, Slice, Split, Constant, Rank, Shape, Shuffle, …
Matrix calculation | MatMul, MatrixInverse, MatrixDeterminant, …
State operation | Variable, Assign, AssignAdd, …
Neural network function | SoftMax, Sigmoid, ReLU, Convolution2D, MaxPool, …
Queue operation | Enqueue, Dequeue, MutexAcquire, MutexRelease, …
Control flow operation | Merge, Switch, Enter, Leave, NextIteration
3.1 Numerical calculation Ops
These functions behave very much like their numpy counterparts.
3.2 Data types in tensorflow
The main data types in TensorFlow are boolean, numeric (int, float), and string. TensorFlow integrates NumPy's data types: tf.int32 is equal to np.int32. One thing to note is that constant values are stored in the definition of the graph, so if there are many large constants, loading the graph becomes costly. Consider the following code:
import tensorflow as tf

const = tf.constant([1.0, 2.0], name="const")
with tf.Session() as sess:
    print(sess.graph.as_graph_def())
The printed graph definition shows that the constant values are stored directly in it. Therefore, when the amount of data is large, the data is usually supplied through variables or readers instead.
4 Variables
There are two main ways to define parameters:
4.1 tf.Variable
Defined as follows:
s = tf.Variable(2, name="scalar")
m = tf.Variable([[0,1],[2,3]], name="matrix")
W = tf.Variable(tf.zeros([784, 10]))
tf.constant is just an op, while tf.Variable is a class composed of multiple ops. For example, x = tf.Variable(...) creates several ops: x.initializer (initialize x), x.value() (read the value), x.assign(...) (write a value), and so on.
4.2 tf.get_variable
s = tf.get_variable("scalar", initializer=tf.constant(2))
m = tf.get_variable("matrix", initializer=tf.constant([[0,2],[2,3]]))
W = tf.get_variable("big_matrix", shape=(784, 10), initializer=tf.zeros_initializer())
Although both tf.Variable and tf.get_variable can define parameters, tf.get_variable is recommended because it supports variable sharing and more flexible initialization.
Next, let's print the value of parameter W and take a look:
with tf.Session() as sess:
    print(sess.run(W))
Running this reports an error! Before an op that uses a variable can be executed, the variable must be initialized. The easiest way is to initialize all variables at once:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
Initialize some parameter variables:
with tf.Session() as sess:
    sess.run(tf.variables_initializer([s, m]))
Initialize a parameter variable:
with tf.Session() as sess:
    sess.run(W.initializer)
After initialization, let's print the parameter value again:
with tf.Session() as sess:
    print(W.eval())
This time the value prints successfully: the 784×10 matrix of zeros defined above. print(W.eval()) and print(sess.run(W)) are equivalent; both print the parameter value.
4.3 tf.Variable.assign()
import tensorflow as tf

W = tf.Variable(10)
W.assign(100)
with tf.Session() as sess:
    sess.run(W.initializer)
    print(W.eval())
What is the value of the parameter W in the above program? The answer is 10. W.assign(100) seems to have no effect because it merely defines an op; for it to take effect, it must be executed in a session. The corrected version is as follows:
import tensorflow as tf

W = tf.Variable(10)
assign_op = W.assign(100)
with tf.Session() as sess:
    sess.run(W.initializer)
    sess.run(assign_op)
    print(W.eval())
The result is finally correct: 100
5 Placeholder
In TensorFlow, a program generally has two steps:
- Assemble a graph
- Use a session to execute operations in the graph
In the first step, assembling the graph does not require real values for the computation. For example, when we define a function f(x, y) = 2 * x + y, we do not need to know the values of x and y; x and y are just placeholders.
Next we look at the placeholder function in tensorflow:
tf.placeholder(dtype, shape=None, name=None)
Let's see how to use placeholder:
a = tf.placeholder(tf.float32, shape=[3])
b = tf.constant([5, 5, 5], tf.float32)
c = tf.add(a, b)
with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: [1, 2, 3]}))
The output value is [6. 7. 8.]. feed_dict is used to supply a value for any tensor that can be fed.