TensorFlow 2.0 Tutorial - Saving and Loading Models

Copyright notice: this is an original post by the author; do not repost without permission. https://blog.csdn.net/qq_31456593/article/details/88829202


The TensorFlow 2.0 tutorial series is continuously updated at https://blog.csdn.net/qq_31456593/article/details/88606284

For the complete TensorFlow 2.0 tutorial code, see the Chinese tutorial repo tensorflow2_tutorials_chinese (stars welcome)

Getting-started tutorials:
TensorFlow 2.0 Tutorial - Keras Quick Start
TensorFlow 2.0 Tutorial - The Keras Functional API
TensorFlow 2.0 Tutorial - Training Models with Keras
TensorFlow 2.0 Tutorial - Building Custom Layers with Keras
TensorFlow 2.0 Tutorial - Keras Model Saving and Serialization

Import the data

import os
import tensorflow as tf
from tensorflow import keras

(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()

# Use only the first 1000 examples to keep the demo fast
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]

# Flatten each 28x28 image to a 784-vector and scale pixels to [0, 1]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
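The reshape-and-rescale step above can be sketched in isolation; the random array below is a hypothetical stand-in for the MNIST subset:

```python
import numpy as np

# Hypothetical stand-in for the MNIST subset: 4 grayscale 28x28 images
images = np.random.randint(0, 256, size=(4, 28, 28))

# Flatten each image into a 784-dim vector and scale pixels to [0, 1]
flat = images.reshape(-1, 28 * 28) / 255.0

print(flat.shape)                            # (4, 784)
print(flat.min() >= 0.0, flat.max() <= 1.0)  # True True
```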

1. Define a model

def create_model():
    model = keras.Sequential([
        keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(10, activation='softmax')
    ])

    model.compile(optimizer='adam',
                  loss=keras.losses.sparse_categorical_crossentropy,
                  metrics=['accuracy'])
    return model

model = create_model()
model.summary()

Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_4 (Dense)              (None, 128)               100480    
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_5 (Dense)              (None, 10)                1290      
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
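The parameter counts in the summary follow directly from the layer shapes: each Dense layer stores a kernel matrix plus a bias vector, and Dropout has no parameters.

```python
# dense_4: 784 inputs x 128 units (kernel) + 128 biases
dense_params = 784 * 128 + 128
# dense_5: 128 inputs x 10 units (kernel) + 10 biases
out_params = 128 * 10 + 10

print(dense_params, out_params, dense_params + out_params)  # 100480 1290 101770
```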

2. The checkpoint callback

check_path = '106save/model.ckpt'
check_dir = os.path.dirname(check_path)

# Save the model's weights at the end of every epoch
cp_callback = tf.keras.callbacks.ModelCheckpoint(check_path,
                                                 save_weights_only=True, verbose=1)
model = create_model()
model.fit(train_images, train_labels, epochs=10,
          validation_data=(test_images, test_labels),
          callbacks=[cp_callback])
Train on 1000 samples, validate on 1000 samples
Epoch 1/10
 544/1000 [===============>..............] - ETA: 0s - loss: 2.0658 - accuracy: 0.2831 
Epoch 00001: saving model to 106save/model.ckpt
1000/1000 [==============================] - 1s 855us/sample - loss: 1.8036 - accuracy: 0.4190 - val_loss: 1.3101 - val_accuracy: 0.6700
Epoch 2/10
 800/1000 [=======================>......] - ETA: 0s - loss: 1.0327 - accuracy: 0.7125
Epoch 00002: saving model to 106save/model.ckpt
1000/1000 [==============================] - 0s 132us/sample - loss: 1.0101 - accuracy: 0.7190 - val_loss: 0.8742 - val_accuracy: 0.7650
Epoch 3/10
 768/1000 [======================>.......] - ETA: 0s - loss: 0.7168 - accuracy: 0.7865
Epoch 00003: saving model to 106save/model.ckpt
1000/1000 [==============================] - 0s 113us/sample - loss: 0.7214 - accuracy: 0.7900 - val_loss: 0.7212 - val_accuracy: 0.7950
Epoch 4/10
 992/1000 [============================>.] - ETA: 0s - loss: 0.5918 - accuracy: 0.8367
Epoch 00004: saving model to 106save/model.ckpt
1000/1000 [==============================] - 0s 90us/sample - loss: 0.5904 - accuracy: 0.8380 - val_loss: 0.6292 - val_accuracy: 0.8140
Epoch 5/10
 864/1000 [========================>.....] - ETA: 0s - loss: 0.4970 - accuracy: 0.8600
Epoch 00005: saving model to 106save/model.ckpt
1000/1000 [==============================] - 0s 105us/sample - loss: 0.4997 - accuracy: 0.8600 - val_loss: 0.5710 - val_accuracy: 0.8410
Epoch 6/10
 896/1000 [=========================>....] - ETA: 0s - loss: 0.4247 - accuracy: 0.8839
Epoch 00006: saving model to 106save/model.ckpt
1000/1000 [==============================] - 0s 97us/sample - loss: 0.4316 - accuracy: 0.8810 - val_loss: 0.5430 - val_accuracy: 0.8420
Epoch 7/10
  32/1000 [..............................] - ETA: 0s - loss: 0.2628 - accuracy: 0.9688
Epoch 00007: saving model to 106save/model.ckpt
1000/1000 [==============================] - 0s 81us/sample - loss: 0.3724 - accuracy: 0.8930 - val_loss: 0.5041 - val_accuracy: 0.8480
Epoch 8/10
  32/1000 [..............................] - ETA: 0s - loss: 0.2136 - accuracy: 0.9375
Epoch 00008: saving model to 106save/model.ckpt
1000/1000 [==============================] - 0s 75us/sample - loss: 0.3221 - accuracy: 0.9030 - val_loss: 0.4861 - val_accuracy: 0.8510
Epoch 9/10
 960/1000 [===========================>..] - ETA: 0s - loss: 0.3195 - accuracy: 0.9177
Epoch 00009: saving model to 106save/model.ckpt
1000/1000 [==============================] - 0s 108us/sample - loss: 0.3230 - accuracy: 0.9150 - val_loss: 0.4580 - val_accuracy: 0.8600
Epoch 10/10
 704/1000 [====================>.........] - ETA: 0s - loss: 0.2577 - accuracy: 0.9219
Epoch 00010: saving model to 106save/model.ckpt
1000/1000 [==============================] - 0s 128us/sample - loss: 0.2701 - accuracy: 0.9170 - val_loss: 0.4465 - val_accuracy: 0.8620

<tensorflow.python.keras.callbacks.History at 0x7fbcd872fbe0>
!ls {check_dir}
checkpoint  model.ckpt.data-00000-of-00001  model.ckpt.index
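The files listed above use TensorFlow's generic checkpoint format: an `.index` file plus sharded `.data` files. As a rough sketch, `tf.train.Checkpoint` writes the same format, and `tf.train.list_variables` can inspect what a checkpoint contains (the `demo` variable name below is made up for illustration):

```python
import os
import tempfile

import tensorflow as tf

# Write a checkpoint containing a single named variable
v = tf.Variable([[1.0, 2.0], [3.0, 4.0]])
ckpt = tf.train.Checkpoint(demo=v)
path = ckpt.save(os.path.join(tempfile.mkdtemp(), 'model.ckpt'))

# List the (name, shape) pairs stored in the .index/.data files
for name, shape in tf.train.list_variables(path):
    print(name, shape)
```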
model = create_model()

loss, acc = model.evaluate(test_images, test_labels)
print("Untrained model, accuracy: {:5.2f}%".format(100*acc))
1000/1000 [==============================] - 0s 69us/sample - loss: 2.4036 - accuracy: 0.0890
Untrained model, accuracy:  8.90%
model.load_weights(check_path)
loss, acc = model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
1000/1000 [==============================] - 0s 47us/sample - loss: 0.4465 - accuracy: 0.8620
Restored model, accuracy: 86.20%

3. Configuring the checkpoint callback

check_path = '106save02/cp-{epoch:04d}.ckpt'
check_dir = os.path.dirname(check_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(check_path, save_weights_only=True,
                                                 verbose=1, period=5)  # save every 5 epochs
model = create_model()
model.fit(train_images, train_labels, epochs=10,
          validation_data=(test_images, test_labels),
          callbacks=[cp_callback])
Train on 1000 samples, validate on 1000 samples
Epoch 1/10
1000/1000 [==============================] - 1s 1ms/sample - loss: 1.7242 - accuracy: 0.4490 - val_loss: 1.2205 - val_accuracy: 0.6890
Epoch 2/10
1000/1000 [==============================] - 0s 102us/sample - loss: 0.9133 - accuracy: 0.7450 - val_loss: 0.8194 - val_accuracy: 0.7800
Epoch 3/10
1000/1000 [==============================] - 0s 88us/sample - loss: 0.6489 - accuracy: 0.8360 - val_loss: 0.6748 - val_accuracy: 0.8050
Epoch 4/10
1000/1000 [==============================] - 0s 78us/sample - loss: 0.5492 - accuracy: 0.8360 - val_loss: 0.6144 - val_accuracy: 0.8150
Epoch 5/10
  32/1000 [..............................] - ETA: 0s - loss: 0.4468 - accuracy: 0.9062
Epoch 00005: saving model to 106save02/cp-0005.ckpt
1000/1000 [==============================] - 0s 130us/sample - loss: 0.4755 - accuracy: 0.8750 - val_loss: 0.5483 - val_accuracy: 0.8330
Epoch 6/10
1000/1000 [==============================] - 0s 94us/sample - loss: 0.4191 - accuracy: 0.8790 - val_loss: 0.5164 - val_accuracy: 0.8500
Epoch 7/10
1000/1000 [==============================] - 0s 107us/sample - loss: 0.3699 - accuracy: 0.8980 - val_loss: 0.4935 - val_accuracy: 0.8420
Epoch 8/10
1000/1000 [==============================] - 0s 87us/sample - loss: 0.3404 - accuracy: 0.9070 - val_loss: 0.4559 - val_accuracy: 0.8600
Epoch 9/10
1000/1000 [==============================] - 0s 85us/sample - loss: 0.3060 - accuracy: 0.9250 - val_loss: 0.4513 - val_accuracy: 0.8630
Epoch 10/10
 800/1000 [=======================>......] - ETA: 0s - loss: 0.3016 - accuracy: 0.9150
Epoch 00010: saving model to 106save02/cp-0010.ckpt
1000/1000 [==============================] - 0s 120us/sample - loss: 0.2845 - accuracy: 0.9220 - val_loss: 0.4402 - val_accuracy: 0.8580

<tensorflow.python.keras.callbacks.History at 0x7fbc5c911b38>
!ls {check_dir}
checkpoint			  cp-0010.ckpt.data-00000-of-00001
cp-0005.ckpt.data-00000-of-00001  cp-0010.ckpt.index
cp-0005.ckpt.index
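Note that later TF 2.x releases deprecate the `period` argument in favor of `save_freq`, which counts batches rather than epochs. A sketch of the equivalent callback, assuming the default batch size of 32 (so 1000 samples is 32 batches per epoch):

```python
import tensorflow as tf

# save_freq counts batches: ceil(1000 / 32) = 32 batches per epoch,
# so saving every 5 epochs corresponds to save_freq=5 * 32
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    '106save02/cp-{epoch:04d}.ckpt',
    save_weights_only=True,
    verbose=1,
    save_freq=5 * 32)
```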

4. Loading the latest checkpoint

latest = tf.train.latest_checkpoint(check_dir)
print(latest)

106save02/cp-0010.ckpt
model = create_model()
model.load_weights(latest)
loss, acc = model.evaluate(test_images, test_labels)
print('restored model accuracy: {:5.2f}%'.format(acc*100))
1000/1000 [==============================] - 0s 78us/sample - loss: 0.4402 - accuracy: 0.8580
restored model accuracy: 85.80%
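`latest_checkpoint` works by reading the small text file named `checkpoint` that sits in the directory and records the most recent save. A minimal sketch using `tf.train.Checkpoint` directly:

```python
import os
import tempfile

import tensorflow as tf

ckpt_dir = tempfile.mkdtemp()
ckpt = tf.train.Checkpoint(step=tf.Variable(0))

# Each save bumps the numbered suffix and updates the 'checkpoint' file
ckpt.save(os.path.join(ckpt_dir, 'cp'))  # writes cp-1.*
ckpt.save(os.path.join(ckpt_dir, 'cp'))  # writes cp-2.*

print(tf.train.latest_checkpoint(ckpt_dir))  # path ending in 'cp-2'
```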

5. Manually saving weights

model.save_weights('106save03/manually_model.ckpt')
model = create_model()
model.load_weights('106save03/manually_model.ckpt')
loss, acc = model.evaluate(test_images, test_labels)
print('restored model accuracy: {:5.2f}%'.format(acc*100))
1000/1000 [==============================] - 0s 69us/sample - loss: 0.4402 - accuracy: 0.8580
restored model accuracy: 85.80%

6. Saving the entire model

model = create_model()
model.fit(train_images, train_labels, epochs=10,
          validation_data=(test_images, test_labels))
model.save('106save03.h5')
Train on 1000 samples, validate on 1000 samples
Epoch 1/10
1000/1000 [==============================] - 0s 240us/sample - loss: 1.7636 - accuracy: 0.4460 - val_loss: 1.2041 - val_accuracy: 0.7230
Epoch 2/10
1000/1000 [==============================] - 0s 82us/sample - loss: 0.9278 - accuracy: 0.7410 - val_loss: 0.7989 - val_accuracy: 0.7880
Epoch 3/10
1000/1000 [==============================] - 0s 97us/sample - loss: 0.6722 - accuracy: 0.7970 - val_loss: 0.6739 - val_accuracy: 0.8110
Epoch 4/10
1000/1000 [==============================] - 0s 110us/sample - loss: 0.5326 - accuracy: 0.8530 - val_loss: 0.6027 - val_accuracy: 0.8170
Epoch 5/10
1000/1000 [==============================] - 0s 88us/sample - loss: 0.4674 - accuracy: 0.8640 - val_loss: 0.5623 - val_accuracy: 0.8270
Epoch 6/10
1000/1000 [==============================] - 0s 91us/sample - loss: 0.3986 - accuracy: 0.8900 - val_loss: 0.5429 - val_accuracy: 0.8370
Epoch 7/10
1000/1000 [==============================] - 0s 87us/sample - loss: 0.3717 - accuracy: 0.8830 - val_loss: 0.5205 - val_accuracy: 0.8340
Epoch 8/10
1000/1000 [==============================] - 0s 100us/sample - loss: 0.3492 - accuracy: 0.8980 - val_loss: 0.4844 - val_accuracy: 0.8480
Epoch 9/10
1000/1000 [==============================] - 0s 90us/sample - loss: 0.3048 - accuracy: 0.9200 - val_loss: 0.4603 - val_accuracy: 0.8550
Epoch 10/10
1000/1000 [==============================] - 0s 90us/sample - loss: 0.2574 - accuracy: 0.9290 - val_loss: 0.4674 - val_accuracy: 0.8540
new_model = keras.models.load_model('106save03.h5')
new_model.summary()
Model: "sequential_11"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_22 (Dense)             (None, 128)               100480    
_________________________________________________________________
dropout_11 (Dropout)         (None, 128)               0         
_________________________________________________________________
dense_23 (Dense)             (None, 10)                1290      
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
loss, acc = new_model.evaluate(test_images, test_labels)
print('restored model accuracy: {:5.2f}%'.format(acc*100))
1000/1000 [==============================] - 1s 810us/sample - loss: 0.4674 - accuracy: 0.8540
restored model accuracy: 85.40%

7. Other ways to export a model

import time
saved_model_path = "./saved_models/{}".format(int(time.time()))

tf.keras.experimental.export_saved_model(model, saved_model_path)
saved_model_path
'./saved_models/1553601639'
new_model = tf.keras.experimental.load_from_saved_model(saved_model_path)
new_model.summary()
Model: "sequential_11"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_22 (Dense)             (None, 128)               100480    
_________________________________________________________________
dropout_11 (Dropout)         (None, 128)               0         
_________________________________________________________________
dense_23 (Dense)             (None, 10)                1290      
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
# A model restored this way must be compiled before it can be evaluated or trained
new_model.compile(optimizer=model.optimizer,  # keep the optimizer that was loaded
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Evaluate the restored model.
loss, acc = new_model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
1000/1000 [==============================] - 0s 131us/sample - loss: 0.4674 - accuracy: 0.8540
Restored model, accuracy: 85.40%
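These `tf.keras.experimental` export/load functions were later removed; the stable SavedModel API is `tf.saved_model.save` / `tf.saved_model.load`, which works on any `tf.Module` (Keras models included). A sketch with a made-up `Doubler` module:

```python
import os
import tempfile

import tensorflow as tf

class Doubler(tf.Module):
    """Toy module with a single traced function."""
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return 2.0 * x

export_dir = os.path.join(tempfile.mkdtemp(), 'saved')
tf.saved_model.save(Doubler(), export_dir)

restored = tf.saved_model.load(export_dir)
print(restored(tf.constant([1.0, 2.0])).numpy())  # [2. 4.]
```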
