TensorFlow: Eager execution

1 definition

Eager execution is an imperative interface for numerical computation: it supports GPU acceleration, feels similar to the NumPy library, supports automatic differentiation, and is well suited to machine learning research and experimentation. It makes TensorFlow a more flexible platform.

2 advantages

  • Compatible with Python debugging tools, so code can be debugged easily
  • Provides immediate error messages
  • Allows the use of Python data structures
  • Easy to execute, with Python-style control flow

With eager, you no longer have to worry about issues such as placeholders, sessions, control dependencies, "lazy loading", or name/variable scopes.
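The "Python-style control flow" advantage can be sketched with a minimal example (assumes TF 2.x, where eager execution is already the default; `clip_relu` is a hypothetical helper used only for illustration):

```python
import tensorflow as tf  # TF 2.x: eager execution is on by default

def clip_relu(x, cap=6.0):
    # Plain Python if/else on a tensor's value -- possible because eager
    # evaluates tensors immediately, with no graph or session needed.
    y = tf.maximum(x, 0.0)
    if float(tf.reduce_max(y)) > cap:
        y = tf.minimum(y, cap)
    return y

print(clip_relu(tf.constant([-1.0, 3.0, 10.0])).numpy())  # [0. 3. 6.]
```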

3 example

Next, let's compare how concise eager code is. The following is an example of session-based execution on a defined graph:

import tensorflow as tf

def example():
    x = tf.placeholder(tf.float32, shape=[1, 1])
    m = tf.matmul(x, x)
    print(m)  # prints the symbolic Tensor, not its value
    with tf.Session() as sess:
        m_out = sess.run(m, feed_dict={x: [[2.]]})
    print(m_out)  # [[4.]]

if __name__ == '__main__':
    example()

The result is [[4.]]. Even to square a single number, we must define a placeholder, run a session, and only then see the result of run. The first print statement, executed before run, cannot show the result; it only prints the symbolic tensor.
So let's take a look at how to implement the same function with eager execution, as shown in the following code:

import tensorflow as tf
import tensorflow.contrib.eager as tfe
tfe.enable_eager_execution()

def example():
	x = [[2.]]
	m = tf.matmul(x,x)
	print(m)
if __name__=='__main__':
	example()

The code is much more concise: there is no longer any need to define placeholders, run sessions, and so on.

3.1 Gradients

Automatic differentiation is integrated into the eager executor. Let's test a simple example to see how eager execution computes gradients:

# -*- coding: utf8 -*-
import tensorflow as tf
import tensorflow.contrib.eager as tfe
tfe.enable_eager_execution()

def loss(x, y):
    return (y - x**2)**2

def gradients():
    x = tfe.Variable(2.0)
    # implicit_gradients returns a function that computes the gradient of
    # loss with respect to the variables it uses
    grad = tfe.implicit_gradients(loss)
    print(loss(x, 7.))  # loss value
    print(grad(x, 7.))  # list of (gradient, variable) pairs

if __name__ == '__main__':
    gradients()

The running result is:

tf.Tensor(9.0, shape=(), dtype=float32)  (loss value)
[(<tf.Tensor: id=59, shape=(), dtype=float32, numpy=-24.0>, <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0>)]  (gradient value, paired with its variable)
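The `tfe.implicit_gradients` API lives in `tf.contrib`, which was removed in TF 2.x. A minimal sketch of the same computation with `tf.GradientTape`, the TF 2.x mechanism (assumes TF 2.x is installed):

```python
import tensorflow as tf  # TF 2.x: eager execution is on by default

def loss(x, y):
    return (y - x**2)**2

x = tf.Variable(2.0)
with tf.GradientTape() as tape:
    l = loss(x, 7.0)       # (7 - 2**2)**2 = 9.0
grad = tape.gradient(l, x)  # d/dx = -4*x*(y - x**2) = -24.0
print(l.numpy(), grad.numpy())  # 9.0 -24.0
```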

4 Reasons to choose eager

  • If you want a more flexible framework that can use Python control flow and data structures during experimentation, eager is a good choice
  • If you are developing a new model and want to easily print intermediate values and error information while debugging, eager is a good choice
  • If you are new to TensorFlow, choose eager; it lets you use Python to explore more of the TF APIs. In TensorFlow 2.0, eager execution is enabled by default.
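Since TF 2.0 enables eager by default, the square example from section 3 reduces to a few lines with no enable call at all (a minimal sketch, assuming TF 2.x):

```python
import tensorflow as tf

# In TF 2.x there is nothing to enable -- eager is already on.
print(tf.executing_eagerly())  # True

m = tf.matmul([[2.]], [[2.]])
print(m.numpy())  # [[4.]]
```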


Origin blog.csdn.net/BGoodHabit/article/details/109174293