TensorFlow 2.0 Series (4): Eager Execution and AutoGraph

Drawbacks of static graphs

Early versions of TensorFlow ran in static graph mode, in which the definition of the computation graph and its execution are separated; this is a declarative programming model.

Static graph execution has many advantages, but debugging it is genuinely inconvenient (similar to calling a compiled C program: we cannot step into its internals). Hence Eager Execution, which was first introduced in TensorFlow v1.5 and has become the core API in version 2.0.

With the introduction of Eager Execution, TensorFlow gains dynamic-graph capabilities similar to PyTorch: we no longer have to wait for sess.run(...) to see results, and can debug the code in an IDE at any time and inspect the results of ops. The dynamic graph also brings some new characteristics to writing tf code that require attention.

Eager mode

Eager mode is similar to Python's imperative programming: no compilation is needed before running, which is very intuitive.

Basic characteristics of eager execution

Support for numpy

Eager mode is very friendly towards numpy; specifically:

  • numpy operations accept tf.Tensor objects as arguments;
  • TensorFlow math operations convert Python objects and numpy arrays into tf.Tensor objects;
  • the tf.Tensor.numpy() method returns the value as a numpy ndarray.

For example:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Jan  9 10:21:24 2020

@author: [email protected]
"""

import numpy as np
import tensorflow as tf
tf.compat.v1.enable_eager_execution()

def example_of_tf_and_np():

    a = tf.constant([[1,2],[3,4]])
    b = tf.add(a,1)
    
    print(a)
    print(b)
    
    print('tf\'s multiply: ')
    print(a*b)
    
    c = np.multiply(a,b)
    print('numpy\'s multiply:')
    print(c)
    
    print('transfer tensor a to numpy ndarray from: ')
    print(a.numpy())
    
if __name__ == '__main__':
    example_of_tf_and_np()

Output:

tf's multiply: 
tf.Tensor(
[[ 2  6]
 [12 20]], shape=(2, 2), dtype=int32)
numpy's multiply:
[[ 2  6]
 [12 20]]
transfer tensor a to numpy ndarray from: 
[[1 2]
 [3 4]]

Although eager-mode TensorFlow interoperates well between tf.Tensor and numpy multidimensional arrays, that does not mean a variable defined as a tf.Tensor is equivalent to an ordinary Python variable. In practice, be careful not to confuse tf Tensors with plain Python objects.
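
A quick sketch of the distinction (my own illustration, continuing with the imports above, not from the original post):

t = tf.constant(1)
print(type(t))          # an EagerTensor, not a Python int
print(type(t.numpy()))  # a numpy scalar -- convert explicitly when a plain value is needed
print(t == 1)           # elementwise comparison yields a tf.Tensor of dtype bool, not a Python bool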

AutoGraph - dynamic graphs

Eager mode supports Python control flow as well as tf dynamic control flow. For tf dynamic control flow, a while loop (or similar control built with for or if) of the form:

while x>0:
    x = x-1

can be written in TensorFlow's control flow as tf.while_loop(..., loop_vars=(x,)). However, tf.while_loop cannot support an unlimited number of loop variables, and the efficiency of the TensorFlow graph is affected by the number of loop iterations, so while loops cannot be used entirely freely.
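
As a rough sketch (my own, based on the countdown above), the same loop could be written as:

x = tf.constant(5)
result = tf.while_loop(
    cond=lambda x: x > 0,       # loop condition on the loop variable
    body=lambda x: (x - 1,),    # body returns the same structure as loop_vars
    loop_vars=(x,))
print(result)                   # a one-element tuple holding the final tensor (0)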

AutoGraph uses static analysis to determine which symbols the code modifies, in order to convert them into control-flow variables. Static analysis is usually performed on a single function; Python's dynamic nature limits its effectiveness across functions.

Static analysis vs. dynamic control flow

Visibility of local variables

When a local variable is modified inside a function, the change is not visible in the main program outside; similarly, when a method defined in a class modifies a local variable, the main program does not see the change unless these variables are explicitly returned as outputs. Likewise, variables internal to a class member function are not visible outside the function.
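
A minimal sketch (my own example) of making a modified local visible by returning it explicitly:

@tf.function
def counter():
    count = tf.constant(0)
    for _ in tf.range(3):
        count += 1        # modification of a local variable inside the graph
    return count          # only visible to the caller because it is returned

print(counter())          # tf.Tensor(3, shape=(), dtype=int32)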

Using Python collections in TensorFlow control flow

TensorFlow control flow supports most Python data structures, such as lists, dictionaries and tuples, including namedtuple objects. However, in tf control flow these variables are assumed to have a fixed structure; that is to say, inside a loop a list cannot change length and a dictionary cannot add or remove keys. For details on namedtuple, see: https://docs.python.org/3/library/collections.html#collections.namedtuple


def fn():
  l = []

  def loop_cond(i):
    return i < 10

  def loop_body(i):
    i = i + 1
    l.append(i)
    return i,

  tf.while_loop(
      cond=loop_cond,
      body=loop_body,
      loop_vars=(0,))

  return l

print(fn()) # output: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

tf.function(fn)() # ERROR

Under eager execution this code runs, but tf.function(fn)() raises an error. This is because tf.function() triggers graph execution, and graph execution uses a special mechanism to ensure the correct ordering of operations.

Another example:

def fnn():
    l = []
    for i in tf.range(10):
      l.append(i)  # Error -- illegal tensor capture!
    return l

Executing ll = fnn() directly in eager mode gives ll as a list of tensors. But executing it with tf.function(fnn)() raises the following error:

InaccessibleTensorError: The tensor 'Tensor("placeholder:0", shape=(), dtype=int32)' cannot be accessed here: it is defined in another function or code block. Use return values, explicit Python locals or TensorFlow collections to access it. Defined in: FuncGraph(name=while_body_1396, id=5377892048); accessed from: FuncGraph(name=fnn, id=5374487632).

The correct approach is to define l as a tf.TensorArray() variable and call the TensorArray's write() method inside the loop to add elements to it one by one. When the local variable l is defined, it is given size 0 and dtype int32, and the TensorArray is made variable-length with dynamic_size=True.

def fnn():
    l=tf.TensorArray(tf.int32,size=0,dynamic_size=True)
    for i in tf.range(10):
        l = l.write(l.size(), i)  # write() returns the updated TensorArray; reassign it
    return l
tf.function(fnn)() 

Of course, the above fnn() function can also be executed directly in eager mode (ll = fnn()), which gives:

ll
Out[188]: <tensorflow.python.ops.tensor_array_ops.TensorArray at 0x140957d90>
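
To read the values back as an ordinary tensor, one can call stack() on the TensorArray (a usage sketch of my own, assuming the fnn() above):

print(ll.stack())   # tf.Tensor([0 1 2 3 4 5 6 7 8 9], shape=(10,), dtype=int32)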

If TensorFlow control flow contains Python collections, they can be indexed, but their structure must remain fixed.
For example:


#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Jan 18 22:27:11 2020

@author: [email protected]
"""

import tensorflow as tf


if tf.executing_eagerly():
    tf.compat.v1.disable_eager_execution()
    
@tf.function
def dict_loop():
    d = {'a': tf.constant(3)}
    for i in tf.range(10):
      d = {key: value + i for key, value in d.items()}
    return d

@tf.function
def dict_loop2():
    d = {'a': tf.constant(3)}
    for i in tf.range(10):
      for key in d:
        d[key] += i  # Problem -- accessing `dict` using non-constant key
    return d
d = dict_loop()   # d = {'a': <tf.Tensor 'StatefulPartitionedCall_3:0' shape=() dtype=int32>}
# but
d2 = dict_loop2() # ERROR

In this example, the way dict_loop2() is written raises an error, while dict_loop() works fine. The official explanation is that functional style should be used; pay attention to this nuance when writing code.

Tensor dtypes and shapes in TensorFlow control flow

In tf graph control flow, the dtype and shape of a Tensor need to remain constant. This restriction does not apply in eager execution, because eager mode uses Python control flow. So when moving code from eager mode to graph mode, this issue must be kept in mind.

Static and dynamic shape computation

The shape and rank of a tensor are obtained as follows:

Get the static shape with the .shape attribute and the static rank with .shape.rank. When the tensor's shape is dynamic, its shape and rank should instead be obtained with tf.shape() and tf.rank() respectively.
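
A short sketch (my own) contrasting the two:

x = tf.zeros([2, 3])
print(x.shape)        # static shape: (2, 3)
print(x.shape.rank)   # static rank: 2
print(tf.shape(x))    # dynamic shape, returned as a tensor: [2 3]
print(tf.rank(x))     # dynamic rank, returned as a tensor: 2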

If the code needs to use dynamic shapes, there are two methods:
1) use the @tf.function decorator with an input_signature, e.g.

@tf.function(input_signature=(tf.TensorSpec(shape=(None,)),))
def f(x):  # x now has dynamic shape
  if tf.shape(x)[0] >= 3:  # Builds a tf.cond
    val = x[4]  # Okay, bounds checks are skipped when the shape is dynamic
  else:
    val = some_default_value

Because input_signature is specified here, shape-related checks are skipped when tf executes the graph.
2) in Python control flow, add a check for whether the shape is static or dynamic, e.g.

if x.shape[0] is None:  # Python bool, does not use tf.cond
  # ... use x.shape here ...
else:
  # ... use tf.shape(x) here ...

dtype and shape consistency

In tf graph mode, dtype and shape must always stay consistent. For example, the following code is erroneous:

x = tf.cond(
    tf.random.uniform(()) > 0.5,
    lambda: tf.constant(1, dtype=tf.int32),
    lambda: tf.constant(1, dtype=tf.float32))  # Error -- inconsistent dtypes: int32, float32

# This won't work - "x" changes dtype inside the loop.
x = tf.while_loop(
    lambda _: tf.random.uniform(()) > 0.5,
    lambda x: tf.constant(1, dtype=tf.float32),
    loop_vars=(tf.constant(1, dtype=tf.int32),))  # Error -- inconsistent dtypes: int32, float32
# Example of illegal shape change in a loop:
x = tf.constant(1,)
while tf.random.uniform(()) > 0.5:
  x = tf.constant((1, 2, 3))  # Error -- inconsistent shapes: (), (3,)


If a value in the control flow is undefined or None, an error will also be raised.
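
For contrast, a consistent version (my sketch, not from the original) keeps both branches at the same dtype and shape:

x = tf.cond(
    tf.random.uniform(()) > 0.5,
    lambda: tf.constant(1.0),
    lambda: tf.constant(2.0))   # OK -- both branches return float32 scalars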

Accessibility of source code

Eager mode can execute code whose original source is visible at runtime, but there are exceptions:
1) Python code executed in an interactive environment, e.g. IPython or Jupyter Lab;
2) functions with native bindings, e.g. code written in other languages;
3) dynamic code executed with exec or eval.

inspect.getsource(object) can be used to check whether source code is accessible. https://docs.python.org/3/library/inspect.html#inspect.getsource
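
A tiny sketch (my own) of checking source accessibility:

import inspect

def g(x):
    return x + 1

print(inspect.getsource(g))   # prints the function's source; raises OSError if unavailable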

For lambda functions, for example:

foo = (
 'bar',
 lambda: x)

This simple case, where the function is defined inside the lambda expression itself, poses no problem. In nested cases, the lambda should be declared before the call that uses it, for example:

my_lambda = lambda: x
foo = ('bar', my_lambda)

Training in eager mode

First look at this example:

w = tf.Variable([[1.0]])
# forward pass to compute the loss
with tf.GradientTape() as tape:
  loss = w * w

grad = tape.gradient(loss, w)
print(grad)  # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)

This is training under eager execution. In eager mode, tf.GradientTape can be used to trace and record operations. The tape can be pictured as a cassette tape: doing the backward computation is equivalent to "rewinding" it. Here is a multiple linear regression example:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Jan 18 01:36:15 2020

@author: [email protected]
"""
import tensorflow as tf


# A toy dataset of points around 3 * x + 2
# add a little noise to generate the training data
NUM_EXAMPLES = 1000
training_inputs = tf.random.normal([NUM_EXAMPLES,4])
noise = tf.random.normal([NUM_EXAMPLES, 1])  # column vector, matching the [N, 1] matmul output

training_outputs = tf.matmul(training_inputs,[[2.7],[3.1],[5.4],[8.9]])+6.5+noise

def prediction(indata, weight, bias):
  return tf.matmul(indata, weight) + bias

# use mean squared error as the loss
def loss(weights, biases):
  error = prediction(training_inputs, weights, biases) - training_outputs
  return tf.reduce_mean(tf.square(error))

# Return the derivative of loss with respect to weight and bias
def grad(weights, biases):
  # forward pass to compute the loss, recording the ops on the tape for gradient computation
  with tf.GradientTape() as tape:
    loss_value = loss(weights, biases)
  # play the tape backwards to get the gradients
  return tape.gradient(loss_value, [weights, biases])

train_steps = 300
learning_rate = 0.01
# Start with arbitrary values for W and B on the same batch of data
W = tf.Variable([[0.],[0.],[0.],[0.]])
B = tf.Variable(0.)

print("Initial loss: {:.3f}".format(loss(W, B)))

for i in range(train_steps):
  dW, dB = grad(W, B)
  W.assign_sub(dW * learning_rate) # W = W - dW * learning_rate 
  B.assign_sub(dB * learning_rate) # B = B - dB * learning_rate
  if i % 50 == 0:
    print("Loss at step {:03d}: {:.3f}".format(i, loss(W, B)))

print("Final loss: {:.3f}".format(loss(W, B)))
print("W = {}, B = {}".format(W.numpy(), B.numpy()))

Output:

Initial loss: 161.488
Loss at step 000: 155.372
Loss at step 050: 23.209
Loss at step 100: 4.175
Loss at step 150: 1.404
Loss at step 200: 0.996
Loss at step 250: 0.936
Final loss: 0.927
W = [[2.6918666]
[3.0815856]
[5.377633 ]
[8.876133 ]], B = 6.478857517242432

Further reading:

tf.Variable() and assign

Variables provide an assignment interface: assign(). Consider this TF 1.x graph-mode example:

W = tf.Variable(10)
W.assign(100) 
with tf.Session() as sess: 
    sess.run(W.initializer)    
    print(W.eval(session=sess))

Does it print 10 or 100?
The answer is: 10.

This is because W.assign(100) does not actually assign 100 to W: assign() is an op, so it returns an op object. Only when this op object is run in the Session does the value get assigned to W.

W = tf.Variable(10)
assign_op = W.assign(100) 
with tf.Session() as sess:
     sess.run(W.initializer) 
     sess.run(assign_op) 
     print(W.eval())# >> 100

The line sess.run(W.initializer) can actually be omitted, since running assign_op also completes the initial assignment. In fact, the initializer op is just a special assign op.

https://cloud.tencent.com/developer/article/1082033
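
For comparison, a sketch (my addition) of the same idea in TF 2.x eager mode, where assign() takes effect immediately:

W = tf.Variable(10)
W.assign(100)        # applied right away, no Session needed
print(W.numpy())     # 100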

python collections

collections is a built-in Python package that provides very convenient data structures.
For details, refer to Liao Xuefeng's Python tutorial.
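
For example, a minimal namedtuple sketch (my own), since namedtuple was mentioned in the AutoGraph discussion above:

from collections import namedtuple

Point = namedtuple('Point', ['x', 'y'])
p = Point(x=1, y=2)
print(p.x, p.y)   # 1 2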
