[Artificial Intelligence] Notes, Section III: TensorFlow 2.0 eager execution mode and JIT compilation mode

TensorFlow 2.0 defaults to eager execution. Unlike before, where you first had to build the model graph and could only see results after running it, in this mode you can inspect the output of each operation as the graph is being constructed, which greatly simplifies debugging. You can also feed input data directly, without first defining variables. However, preliminary tests show that this mode reduces execution efficiency, so it is recommended only for debugging.
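
For example, in eager mode an operation runs as soon as it is called and the result can be printed directly. A minimal illustrative snippet (not part of the test code below):

import tensorflow as tf

x = tf.constant([[1., 2.], [3., 4.]])
y = tf.matmul(x, x)   # executes immediately, no graph build or session run needed
print(y.numpy())      # the result is available right away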

Adding the @tf.function decorator to a function switches it to JIT-compiled graph mode, which corresponds to the default mode of TensorFlow 1.0. It is more efficient and is the mode to use in production environments.
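
As a minimal sketch of this switch (the function name and values here are just placeholders for illustration):

import tensorflow as tf

@tf.function
def square_sum(a, b):
    # traced and compiled into a graph on the first call; the graph is reused on later calls
    return tf.reduce_sum(a * a + b * b)

a = tf.constant([1., 2., 3.])
b = tf.constant([4., 5., 6.])
print(square_sum(a, b).numpy())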

Here is the test code:

import tensorflow as tf
import numpy as np
import time
import shutil
import os

class MyModel(tf.keras.Model):

    def __init__(self, units):
        super(MyModel, self).__init__()
        self.dense = tf.keras.layers.Dense(units, activation=None)

    def call(self, input_data):
        # print('input_data',input_data)
        output = self.dense(input_data)
        return output

print(tf.__version__)
# Define the models
my_model1 = MyModel(3)
my_model2 = MyModel(1)
losses = tf.keras.losses.MeanAbsoluteError()
optimizer = tf.keras.optimizers.Adadelta(learning_rate=1)

# With the @tf.function decorator, the function is JIT-compiled into a graph and runs efficiently
# Remove the @tf.function decorator to run in eager mode, which is useful for debugging but less efficient
@tf.function
def train(input_data, target_data):
    with tf.GradientTape() as tape:
        # print('input_data',input_data.shape)
        prediction = my_model1(input_data)
        prediction = my_model2(prediction)
        # tf.print('prediction1', prediction)
        loss = losses(target_data, prediction)  # Keras losses take (y_true, y_pred)
        # Logging summaries affects performance
        tf.summary.scalar('loss', loss, step=optimizer.iterations)
    variables = my_model1.trainable_variables + my_model2.trainable_variables
    gradients = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(gradients, variables))

# Logging summaries affects performance
if os.path.exists('./tmp/summaries'):
    shutil.rmtree('./tmp/summaries')
summary_writer = tf.summary.create_file_writer('./tmp/summaries')
# Print the execution time
start = time.process_time()
with summary_writer.as_default():
    for i in range(500):
        input_data = np.array([[1., 1.]])
        target_data = np.array([[2.]])
        train(input_data, target_data)
elapsed = (time.process_time() - start)
print("Elapsed time:", elapsed)
# Save the model weights
my_model1.save_weights('./tmp/save_models1.h5')
my_model2.save_weights('./tmp/save_models2.h5')
# Load the model weights
my_model1.load_weights('./tmp/save_models1.h5')
my_model2.load_weights('./tmp/save_models2.h5')
# The code below is not inside a @tf.function-decorated function, so it runs in eager mode
print('Inference')
input_data = np.array([[1., 1.]])
prediction = my_model1(input_data)
prediction = my_model2(prediction)
print('prediction2', prediction.numpy())

 


Origin blog.csdn.net/highlevels/article/details/99942245