TensorFlow 2 - Introduction and Application of Eager Mode

TensorFlow's eager execution mode is an imperative programming environment that evaluates operations immediately, without building a graph: operations return concrete values right away instead of constructing a computational graph to run later. This makes getting started with TensorFlow and debugging models much simpler, and it removes a lot of boilerplate code.
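A minimal sketch of what immediate evaluation means (assuming TensorFlow 2, where eager execution is enabled by default):

import tensorflow as tf

# Eager execution is on by default in TensorFlow 2.
print(tf.executing_eagerly())  # True

# The operation runs immediately and returns a concrete tensor;
# no session or graph-build step is required.
print(tf.add(1, 2))  # tf.Tensor(3, shape=(), dtype=int32)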

Eager execution is a flexible machine-learning platform for research and experimentation, with the following characteristics:

  • A more intuitive interface: organize code naturally and use ordinary Python data structures; iterate quickly on small models and small datasets.
  • Easier debugging: call operations directly to inspect running models and test changes, and use standard Python debugging tools for immediate error reporting.
  • Natural control flow: use Python control flow instead of graph control flow, which simplifies the specification of dynamic models (see the sketch after this list).
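Because tensors carry concrete values in eager mode, ordinary Python control flow can branch on them directly. A small illustrative sketch (this halving loop is a hypothetical example, not from the original post):

import tensorflow as tf

def halve_until_small(x, threshold=1.0):
    """Halve x until it drops below threshold, counting the steps."""
    # A plain Python `while` driven by a tensor's concrete value:
    # in eager mode the comparison is evaluated immediately.
    steps = 0
    while x > threshold:
        x = x / 2
        steps += 1
    return x, steps

value, steps = halve_until_small(tf.constant(40.0))
print(value, steps)  # tf.Tensor(0.625, shape=(), dtype=float32) 6

The first complete example combines these ideas: eager operations compose like ordinary Python functions.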
import tensorflow as tf


def multiply(x, y):
    """Matrix multiplication.
    Note: it requires the input shape of both input to match.
    Args:
        x: tf.Tensor a matrix
        y: tf.Tensor a matrix
    Returns:
        The matrix multiplcation x @ y
    """

    assert x.shape == y.shape
    return tf.matmul(x, y)


def add(x, y):
    """Add two tensors.
    Args:
        x: the left hand operand.
        y: the right hand operand. It should be compatible with x.
    Returns:
        x + y
    """
    return x + y


def main():
    """Main program."""
    A = tf.constant([[1, 2], [3, 4]], dtype=tf.float32)
    x = tf.constant([[0, 10], [0, 0.5]])
    b = tf.constant([[1, -1]], dtype=tf.float32)

    z = multiply(A, x)
    y = add(z, b)
    print(y)


if __name__ == "__main__":
    main()
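Running the script prints the result immediately, with no graph or session involved. Since A @ x = [[0, 11], [0, 32]] and b = [[1, -1]] broadcasts across both rows, the output is:

tf.Tensor(
[[ 1. 10.]
 [ 1. 31.]], shape=(2, 2), dtype=float32)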

GradientTape

tf.GradientTape creates a context (a "tape") that records operations for automatic differentiation. Every operation executed inside the context manager is recorded on the tape if at least one of its inputs is watchable and is being watched.

An input is watchable when:

  • It is a trainable variable created with tf.Variable.
  • It is explicitly watched by the tape, which is done by passing the tf.Tensor to the tape's watch method (tf.GradientTape.watch).
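A quick way to see this rule in action (a minimal sketch, not from the original post): the gradient with respect to an unwatched tf.constant is None, while watching it makes it differentiable.

import tensorflow as tf

x = tf.constant(3.0)
with tf.GradientTape() as tape:
    # x is a constant and is not watched: nothing is recorded for it.
    y = x * x
print(tape.gradient(y, x))  # None

with tf.GradientTape() as tape:
    tape.watch(x)  # explicitly watch the constant
    y = x * x
print(tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)

The post's next example uses a persistent tape: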
import tensorflow as tf
"""
一旦tf.GradientTape.gradient()被调用,tf.GradientTape对象(即所谓的“磁带”)就会释放它保存的全部资源。
在大多数情况下这是我们想要的,但是有的情况下我们需要多次调用tf.GradientTape.gradient()。
这时,我们需要创建一个持久性的梯度“磁带”,它能够允许多次gradient方法的调用而不释放资源。这种情况下交由开发者负责资源的释放
"""
x = tf.Variable(4.0)
y = tf.Variable(2.0)
with tf.GradientTape(persistent=True) as tape:
    z = x + y
    w = tf.pow(x, 2)
dz_dy = tape.gradient(z, y)  # dz/dy = 1
dz_dx = tape.gradient(z, x)  # dz/dx = 1
dw_dx = tape.gradient(w, x)  # dw/dx = 2*x = 8 at x = 4
print(dz_dy, dz_dx, dw_dx)
# Release the resources
del tape
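All three gradient() calls succeed because the tape is persistent; with the default persistent=False, the second call would raise a RuntimeError. The expected results are 1.0 for dz/dy and dz/dx, and 8.0 for dw/dx.

The tape can also watch plain tensors that are not variables, as the next snippet shows: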
import tensorflow as tf

x = tf.constant(4.0)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.pow(x, 2)
# dy/dx = 2*x = 8 at x = 4
dy_dx = tape.gradient(y, x)
print(dy_dx)
