Keras (9): Function conversion with tf.function and @tf.function

This article will introduce the following:

  • Use tf.function and AutoGraph to improve code performance
  • Use @tf.function for function conversion
  • Show the converted code of tf.function
  • tf.Variable cannot be defined inside a TF function

1. Use tf.function and AutoGraph to improve code performance

1. Define a custom activation function for testing
import tensorflow as tf

# Define a custom scaled_elu activation function to exercise
# TF2's built-in tf.function() and AutoGraph.
def scaled_elu(z, scale=1.0, alpha=1.0):
    # z >= 0 ? scale * z : scale * alpha * tf.nn.elu(z)
    is_positive = tf.greater_equal(z, 0.0)
    return scale * tf.where(is_positive, z, alpha * tf.nn.elu(z))

print(scaled_elu(tf.constant(-3.)))
print(scaled_elu(tf.constant([-3., -2.5])))

# ----output----------
tf.Tensor(-0.95021296, shape=(), dtype=float32)
tf.Tensor([-0.95021296 -0.917915  ], shape=(2,), dtype=float32)
2. Use tf.function to convert a Python function into a TF function
scaled_elu_tf = tf.function(scaled_elu)
print(scaled_elu_tf(tf.constant(-3.)))
print(scaled_elu_tf(tf.constant([-3., -2.5])))

#---output------
tf.Tensor(-0.95021296, shape=(), dtype=float32)
tf.Tensor([-0.95021296 -0.917915  ], shape=(2,), dtype=float32)
3. Retrieve the original Python function from the converted TF function
print(scaled_elu_tf.python_function is scaled_elu)

#---output------
True
4. Compare the performance of the Python function and the converted TF function
%timeit scaled_elu(tf.random.normal((1000, 1000)))
%timeit scaled_elu_tf(tf.random.normal((1000, 1000)))

#---output------
745 µs ± 38.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
483 µs ± 39.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
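
The speedup comes from tf.function tracing the Python function into a graph once per input signature and then reusing that graph (the one-time tracing cost is paid on the first call; %timeit amortizes it over many runs). As a side note not in the original article, you can pin the signature with tf.function's input_signature argument so that only a single graph is ever traced; a minimal sketch:

# Pin the accepted signature: one graph is traced and reused for all
# float32 inputs of any shape; other dtypes are rejected.
scaled_elu_sig = tf.function(
    scaled_elu,
    input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])

print(scaled_elu_sig(tf.constant([-3., -2.5])))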

2. Use @tf.function for function conversion

Similar to calling tf.function() directly, you can also use the @tf.function decorator to convert a function:

# 1 + 1/2 + 1/2^2 + ... + 1/2^(n-1)

@tf.function
def converge_to_2(n_iters):
    total = tf.constant(0.)
    increment = tf.constant(1.)
    for _ in range(n_iters):
        total += increment
        increment /= 2.0
    return total

print(converge_to_2(20))

#---output------
tf.Tensor(1.9999981, shape=(), dtype=float32)
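
As a quick check on this result: the loop computes the partial sum of a geometric series, which has the closed form

    1 + 1/2 + 1/2^2 + ... + 1/2^(n-1) = 2 - 2^(1-n)

For n_iters = 20 this gives 2 - 2^(-19) ≈ 1.9999981, matching the printed tensor.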

3. Show the converted code of tf.function

# Show the Python code generated by tf.function/AutoGraph conversion
def display_tf_code(func):
    code = tf.autograph.to_code(func)
    from IPython.display import display, Markdown
    display(Markdown('```python\n{}\n```'.format(code)))
# converge_to_2 itself is a TF function, so pass its original Python
# function (see note 2 below); tf.autograph.to_code() rejects TF functions.
display_tf_code(converge_to_2.python_function)

#---output------
<IPython.core.display.Markdown object>

4. tf.Variable cannot be defined inside a TF function

tf.Variable is a slightly special operation: when a function is traced into a graph, there is no way to determine how many times the function will be called, yet a tf.Variable must be created exactly once. Because of this conflict, a tf.Variable used with @tf.function can only be created outside the function.
If you remove @tf.function and create the Variable inside the function, the return value is the same.

import tensorflow as tf 

var = tf.Variable(0.)

@tf.function
def add_21():
    return var.assign_add(21)  # equivalent to var += 21

print(add_21())

#---output--------
tf.Tensor(21.0, shape=(), dtype=float32)
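
Conversely, here is a minimal sketch (my addition) of what happens when the variable is created inside the decorated function: TF2 only allows a tf.function to create variables on its first trace, so this raises an error.

# Creating a tf.Variable inside a @tf.function body fails, because the
# function would create a new variable each time it runs.
@tf.function
def bad_add_21():
    v = tf.Variable(0.)  # not allowed inside a TF function
    return v.assign_add(21)

try:
    print(bad_add_21())
except ValueError as e:
    print("ValueError:", e)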

5. Notes

  • tf.function does not change a function's output type.
  • If the function passed to display_tf_code() is decorated with @tf.function, an error is raised: tf.autograph.to_code() accepts a module, class, method, function, traceback, frame, or code object, not a TF function.
  • Inside a tf.function, use only TensorFlow operations: the whole function is compiled into a TensorFlow graph, and non-TensorFlow operations cannot be optimized as part of that graph (see the sketch below).
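
A minimal sketch (my addition) illustrating the last point: Python-side code such as print() executes only while the function is being traced into a graph, whereas TensorFlow ops such as tf.print execute on every call.

# Python side effects run only during tracing; TF ops run on every call.
@tf.function
def traced(x):
    print("print():    runs only while tracing")
    tf.print("tf.print(): runs on every call")
    return x * 2

traced(tf.constant(1.))  # first call traces: both lines are printed
traced(tf.constant(2.))  # same signature, no retrace: only tf.print runs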

Origin blog.csdn.net/TFATS/article/details/110531576