TensorFlow 2.0 Series (3): Eager Execution and Computation Graphs

A new feature in TF 2.0: eager execution

The classic TensorFlow workflow is to build a computation graph first; only when a session is started do tensors flow through the graph and get computed. One of the most important features of TF 2.0 is eager execution, which greatly simplifies this cumbersome define-then-run process.

Eager execution provides an imperative programming environment: there is no need to construct a graph first, operations run directly and return concrete values. In older versions we had to build a computation graph and then execute it in a session. Starting from r1.14, TensorFlow offers eager execution, which makes it much easier to quickly build and test models.

Enabling eager mode

In TF 2.0, eager execution is enabled by default. Call tf.executing_eagerly() to check the current state: it returns True when eager execution is on and False when it is off. If it is off, you can turn it on with tf.compat.v1.enable_eager_execution().

import tensorflow.compat.v1 as tf

tf.enable_eager_execution()
print('eager execution? ' + str(tf.executing_eagerly()))  # => True

x = [[1.]]
m = tf.add(x, x)
print("hello, tensorflow: {}".format(m.numpy()))  # => hello, tensorflow: [[2.]]

Disabling eager mode

To disable eager mode, call tf.compat.v1.disable_eager_execution().
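
A minimal sketch of flipping the switch (assuming a fresh TF 2.x process, since eager execution has to be disabled before any eager ops have run):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()
print(tf.executing_eagerly())    # => False: ops now build a graph instead of running

c = tf.constant(1.0)             # a symbolic tensor, no concrete value yet
with tf.compat.v1.Session() as sess:
    print(sess.run(c))           # => 1.0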

Building a computation graph

The graph is a central concept in TensorFlow: it defines the computation process of the whole network. A graph also provides sensible encapsulation and isolation, so that multiple models can run independently without interfering with each other. First let's look at the default graph, then at how to create our own graphs.

The default graph

TensorFlow maintains a default computation graph, which can be obtained with the tf.get_default_graph() function. Every node in the graph is either an operation (op) or a tensor. After the graph is defined, TF statements are not executed immediately; they only run once a session is opened and session.run() is called, and running one node also executes every node it depends on.
Newly created ops are added to the default graph automatically, so checking which graph an op lives in takes just one function call:
**Example 1: The default graph**


c = tf.constant(3.0) 
assert c.graph == tf.get_default_graph() 

with tf.Session(graph=c.graph) as sess:
    print(sess.run(c))

The output is 3.0.

Creating multiple computation graphs

We can also use tf.Graph() (tf.compat.v1.Graph() in TF 2) to create new computation graphs. Tensors and operations are not shared between graphs; each graph is completely independent.
**Example 2: Creating two computation graphs**

g1 = tf.Graph()
with g1.as_default():
    # define variable v in graph g1, initialized to zeros
    v = tf.get_variable("v", shape=[3], initializer=tf.zeros_initializer(dtype=tf.float32))

g2 = tf.Graph()
with g2.as_default():
    # define variable v in graph g2, initialized to ones
    v = tf.get_variable("v", shape=[4], initializer=tf.ones_initializer(dtype=tf.float32))

with tf.Session(graph=g1) as sess:
    sess.run(tf.global_variables_initializer())
    with tf.variable_scope('', reuse=True):
        # prints zeros
        print(sess.run(tf.get_variable("v")))

with tf.Session(graph=g2) as sess:
    sess.run(tf.global_variables_initializer())
    with tf.variable_scope('', reuse=True):
        # prints ones
        print(sess.run(tf.get_variable("v")))

Output:

[0. 0. 0.]
[1. 1. 1. 1.]

Starting a session

Example 3: A counter implemented with TensorFlow that outputs 1, 2, 3, 4, 5 in sequence

import tensorflow.compat.v1 as tf

if tf.executing_eagerly():
    tf.disable_eager_execution()
# print 1, 2, 3, 4, 5 one by one
# define a variable node x, a constant node one, and an op node x_new
x = tf.Variable(0, name='counter')
one = tf.constant(1)
x_new = tf.add(x, one)
# assignment op: update x
update = tf.assign(x, x_new)

# variable initialization
init = tf.global_variables_initializer()
# start the session
with tf.Session() as sess:
    sess.run(init)
    for _ in range(5):
        sess.run(update)
        print(sess.run(x))
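
For contrast, under TF 2's default eager mode the same counter needs neither a graph nor a session. A minimal sketch using tf.Variable.assign_add:

import tensorflow as tf  # TF 2.x, eager execution on by default

x = tf.Variable(0, name='counter')
for _ in range(5):
    x.assign_add(1)      # update the variable in place
    print(x.numpy())     # => 1, 2, 3, 4, 5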

From formulas to computation graphs

Once we know how to define tensors and build computation graphs, we realize that TensorFlow is not just a deep-learning / neural-network library: it is really a general dataflow programming (DFP) tool. Tensors support all kinds of basic mathematical operations, so arbitrary functions and equations can be expressed as graphs, and algorithms you might implement in C or MATLAB can be implemented with TensorFlow as well. Let's take a simple example: drawing a heart-shaped curve.

Drawing a heart curve

We will draw the heart-shaped curve in two different ways. Two common heart-curve formulas are:

y = 0.618*abs(x) ± 0.8*sqrt(64 - x^2)            (explicit form)
x = 2*cos(t) - cos(2*t), y = 2*sin(t) - sin(2*t) (parametric form)

We build the TF computation graph in two ways: with a Variable-type tensor and with a placeholder-type tensor.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Jan  6 10:04:01 2020

@author: [email protected]

"""

import tensorflow as tf


if tf.compat.v1.executing_eagerly():
    tf.compat.v1.disable_eager_execution()
    
import numpy as np
import matplotlib.pyplot as plt

# Variable-type tensor
def variable_tensor(x):

    vx = tf.Variable(x)
    y_part1 = 0.618*tf.math.abs(vx)
    y_part2 = 0.8*tf.math.sqrt(64-tf.math.square(vx))
    
    part1 = tf.math.add(y_part1, y_part2)
    part2 = tf.math.subtract(y_part1, y_part2)
    
    init_op = tf.compat.v1.global_variables_initializer()
    
    with tf.compat.v1.Session() as sess:
        sess.run(init_op)
        y1 = sess.run(part1)
        y2 = sess.run(part2)
    return y1, y2


# placeholder-type tensor
def placeholder_tensor(data):
    t = tf.compat.v1.placeholder(tf.float32, shape=(100,))
    tx = 2*tf.math.cos(t) - tf.math.cos(2*t)
    ty = 2*tf.math.sin(t) - tf.math.sin(2*t)
    
    with tf.compat.v1.Session() as sess:
        # tx and ty are swapped when fetched, which rotates the curve so the heart stands upright
        y = sess.run(tx, feed_dict={t: data})
        x = sess.run(ty, feed_dict={t: data})
    return x,y

if __name__ == "__main__":
    # heart curve 1 (explicit form)
    x = np.linspace(-8,8,100)
    y1, y2 = variable_tensor(x)
    plt.subplot(1,2,1),plt.plot(x, y1, color = 'r')
    plt.subplot(1,2,1),plt.plot(x, y2, color = 'r')
    plt.axis('equal')

    # heart curve 2 (parametric form)
    data = np.linspace(0, 2*np.pi, 100)
    x, y = placeholder_tensor(data)
    plt.subplot(1,2,2),plt.plot(x, y, color = 'r')
    plt.axis('equal')
    plt.show()

Running the script draws the two heart-shaped curves side by side.

Square root

Computing a square root is a common interview question: given a number a, find its square root without calling the sqrt() function. The classic solution is Newton's method; it can also be done with TensorFlow, by solving a least-squares problem.


#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Jan  6 13:50:45 2020

@author: [email protected]
"""


import tensorflow as tf

if tf.compat.v1.executing_eagerly():
    tf.compat.v1.disable_eager_execution()

# Newton's method: iterate x <- (x + a/x) / 2
def solve_root(a):  
    x0 = a
    while (x0*x0-a)>1e-5:
        x0 = (x0+a/x0)/2
        
    return x0

# least squares with TensorFlow: minimize (x*x - a)^2 by gradient descent
def tf_solve_root(a):
    x = tf.Variable(a, dtype=tf.float32)
    x2 = tf.math.multiply(x, x)
    loss = tf.math.square(x2 - a)
    opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.1)
    train = opt.minimize(loss)

    with tf.compat.v1.Session() as sess:
        sess.run(x.initializer)
        for step in range(200):
            sess.run(train)
        root_a = sess.run(x)

    return root_a
            

if __name__ == '__main__':
    print(solve_root(2))      # Newton's method
    print(tf_solve_root(2))   # gradient descent
    

Output:

1.4142156862745097
1.4142137
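
The same least-squares idea carries over to TF 2's eager style. Here is a minimal sketch with tf.GradientTape (assuming TF 2.x; the helper name tf2_solve_root is mine, and the step count and learning rate simply mirror the graph version above):

import tensorflow as tf  # TF 2.x, eager execution on by default

def tf2_solve_root(a, steps=200, lr=0.1):
    x = tf.Variable(float(a))
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = tf.square(x * x - a)   # least-squares loss (x^2 - a)^2
        grad = tape.gradient(loss, x)
        x.assign_sub(lr * grad)           # one gradient-descent step
    return x.numpy()

print(tf2_solve_root(2))  # ≈ 1.4142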

Linear Regression

Now let's build a linear regression model with TensorFlow.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Jan  6 16:15:28 2020

@author: [email protected]
"""

import tensorflow as tf
import numpy as np



if tf.compat.v1.executing_eagerly():
    tf.compat.v1.disable_eager_execution()
    
    
# synthetic data: 2000 samples, 3 features, y = w.x + b + noise
x_data = np.random.randn(2000, 3)
w_real = [0.4, 0.5, 0.12]
b_real = 0.3

noise = np.random.randn(1, 2000) * 0.1
y_data = np.matmul(w_real, x_data.transpose()) + b_real + noise

NUM_STEPS= 20

g  = tf.Graph()
wb_ = []
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float64, shape=[None, 3])
    y_true = tf.compat.v1.placeholder(tf.float64, shape=None)
    
    w = tf.Variable([[0,0,0]], dtype = tf.float64)
    b = tf.Variable(0, dtype=tf.float64)
    y_pred = tf.matmul(w, tf.transpose(x))+b
    
    loss = tf.reduce_mean(tf.square(y_true-y_pred))
    
    optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.5)
    train = optimizer.minimize(loss)
    
    init = tf.compat.v1.global_variables_initializer()
    with tf.compat.v1.Session() as sess:
        sess.run(init)
        for step in range(NUM_STEPS):
            sess.run(train, {x:x_data, y_true:y_data})
            wb_.append(sess.run([w,b]))
            print(step,sess.run([w,b]))
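
For reference, the TF 2-native way to fit the same model drops the graph and the session entirely. A minimal sketch using tf.keras (the data generation mirrors the code above; a single Dense unit is exactly the linear model y = w.x + b, and the epoch count and learning rate are illustrative choices):

import numpy as np
import tensorflow as tf  # TF 2.x

# same synthetic data as above, shaped (samples, features)
x_data = np.random.randn(2000, 3).astype(np.float32)
w_real = np.array([0.4, 0.5, 0.12], dtype=np.float32)
y_data = (x_data @ w_real + 0.3
          + 0.1 * np.random.randn(2000).astype(np.float32)).reshape(-1, 1)

# one Dense unit == y = w.x + b
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss='mse')
model.fit(x_data, y_data, epochs=20, verbose=0)
print(model.get_weights())   # weights ≈ [0.4, 0.5, 0.12], bias ≈ 0.3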




Origin: blog.csdn.net/happyhorizion/article/details/103849408