5. Saving and Restoring Models

Here we use the first 1000 images of TensorFlow's MNIST dataset to train and test a model.

I. Preparation

1.1 Getting the dataset

Code to download the dataset. We keep only the first 1000 samples and divide each pixel value by 255 to normalize:

from __future__ import absolute_import,division,print_function
import os

import tensorflow as tf
from tensorflow import keras

(train_images,train_labels),(test_images,test_labels)=tf.keras.datasets.mnist.load_data()

train_labels=train_labels[:1000]
test_labels=test_labels[:1000]

print(train_images.shape)
print(test_images.shape)
train_images=train_images[:1000].reshape(-1,28*28)/255.0
test_images=test_images[:1000].reshape(-1,28*28)/255.0

print(train_images.shape)
print(test_images.shape)

Output:

11493376/11490434 [==============================] - 14s 1us/step
(60000, 28, 28)
(10000, 28, 28)
(1000, 784)
(1000, 784)

1.2 Defining a model

The model trained here has only three layers; the last layer uses softmax to output a predicted probability for each class.

def create_model():
    model=tf.keras.models.Sequential([
        keras.layers.Dense(512,activation=tf.nn.relu,input_shape=(784,)),
        keras.layers.Dropout(0.2),
        keras.layers.Dense(10,activation=tf.nn.softmax)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss=tf.keras.losses.sparse_categorical_crossentropy,
                  metrics=['accuracy'])
    return model

model=create_model()
model.summary()

Output: the first layer is fully connected, so it has 512*(784+1)=401920 parameters; the dropout layer has 0 parameters; the last layer has 10*(512+1)=5130.

Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 512)               401920    
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                5130      
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
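The parameter counts in the summary can be checked by hand: a Dense layer with n_in inputs and n_out units has n_out*(n_in+1) parameters (one weight per input plus one bias per unit), and Dropout adds none. A quick sketch:

```python
def dense_params(n_in, n_out):
    # Each of the n_out units has n_in weights plus one bias.
    return n_out * (n_in + 1)

layer1 = dense_params(784, 512)   # first Dense layer: 401920
layer2 = 0                        # Dropout has no parameters
layer3 = dense_params(512, 10)    # output Dense layer: 5130
total = layer1 + layer2 + layer3  # 407050, matching the summary
print(layer1, layer3, total)
```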

II. Saving checkpoints during training

Checkpointing is implemented with the tf.keras.callbacks.ModelCheckpoint callback, which takes a few configuration arguments.

2.1 Using the checkpoint callback

Create the model, then pass the ModelCheckpoint callback to training:

Running this code directly raised an error: ImportError: `save_weights` requires h5py. So I tried pip install h5py, which told me to install cython first; after pip install cython, installing h5py succeeded and import h5py worked.


checkpoint_path='./cp.ckpt'
checkpoint_dir=os.path.dirname(checkpoint_path)

cp_callback=tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
                                               save_weights_only=True,
                                               verbose=1)

model=create_model()
model.fit(train_images,train_labels,epochs=10,
          validation_data=(test_images,test_labels),
          callbacks=[cp_callback])

Output:

Epoch 8/10
  32/1000 [..............................] - ETA: 0s - loss: 0.1069 - acc: 1.0000
 160/1000 [===>..........................] - ETA: 0s - loss: 0.0643 - acc: 1.0000
 288/1000 [=======>......................] - ETA: 0s - loss: 0.0758 - acc: 0.9861
 384/1000 [==========>...................] - ETA: 0s - loss: 0.0713 - acc: 0.9896
 512/1000 [==============>...............] - ETA: 0s - loss: 0.0659 - acc: 0.9922
 640/1000 [==================>...........] - ETA: 0s - loss: 0.0624 - acc: 0.9938
 768/1000 [======================>.......] - ETA: 0s - loss: 0.0634 - acc: 0.9922
 896/1000 [=========================>....] - ETA: 0s - loss: 0.0640 - acc: 0.9922
1000/1000 [==============================] - 1s 538us/step - loss: 0.0670 - acc: 0.9900 - val_loss: 0.4627 - val_acc: 0.8560

Epoch 00008: saving model to ./cp.ckpt
Epoch 9/10
  32/1000 [..............................] - ETA: 0s - loss: 0.0356 - acc: 1.0000
 160/1000 [===>..........................] - ETA: 0s - loss: 0.0404 - acc: 1.0000
 288/1000 [=======>......................] - ETA: 0s - loss: 0.0530 - acc: 0.9931
 416/1000 [===========>..................] - ETA: 0s - loss: 0.0651 - acc: 0.9880
 512/1000 [==============>...............] - ETA: 0s - loss: 0.0601 - acc: 0.9902
 640/1000 [==================>...........] - ETA: 0s - loss: 0.0588 - acc: 0.9906
 768/1000 [======================>.......] - ETA: 0s - loss: 0.0593 - acc: 0.9909
 896/1000 [=========================>....] - ETA: 0s - loss: 0.0573 - acc: 0.9911
1000/1000 [==============================] - 1s 562us/step - loss: 0.0577 - acc: 0.9910 - val_loss: 0.4294 - val_acc: 0.8660

Epoch 00009: saving model to ./cp.ckpt
Epoch 10/10
  32/1000 [..............................] - ETA: 0s - loss: 0.0392 - acc: 1.0000
 160/1000 [===>..........................] - ETA: 0s - loss: 0.0441 - acc: 1.0000
 288/1000 [=======>......................] - ETA: 0s - loss: 0.0442 - acc: 1.0000
 416/1000 [===========>..................] - ETA: 0s - loss: 0.0420 - acc: 1.0000
 544/1000 [===============>..............] - ETA: 0s - loss: 0.0402 - acc: 1.0000
 672/1000 [===================>..........] - ETA: 0s - loss: 0.0403 - acc: 0.9985
 800/1000 [=======================>......] - ETA: 0s - loss: 0.0414 - acc: 0.9975
 928/1000 [==========================>...] - ETA: 0s - loss: 0.0414 - acc: 0.9978
1000/1000 [==============================] - 1s 539us/step - loss: 0.0409 - acc: 0.9980 - val_loss: 0.4050 - val_acc: 0.8660

Epoch 00010: saving model to ./cp.ckpt

Only part of the output is shown above. The checkpoint is saved in the current working directory, so you will find the file cp.ckpt there.

This produces a set of checkpoint files that are overwritten at the end of every epoch.

Next we create a new, untrained model. To restore a model from weights alone, you need a model with the same architecture as the original.

Code (you can comment out the earlier checkpointing code here):

model=create_model()
loss,acc=model.evaluate(test_images,test_labels)
print("untrained model, accuracy:{:5.2f}%".format(100*acc))

Output: the untrained model's accuracy is very low; with 10 classes, random guessing gives about 10%.

  32/1000 [..............................] - ETA: 2s
 640/1000 [==================>...........] - ETA: 0s
1000/1000 [==============================] - 0s 164us/step
untrained model, accuracy: 9.10%

But as soon as you restore the weights from the checkpoint file and evaluate on the test set again, the accuracy jumps:

model=create_model()
model.load_weights(checkpoint_path)
loss,acc=model.evaluate(test_images,test_labels)
print("Restored model, accuracy:{:5.2f}%".format(100*acc))

Output:

  32/1000 [..............................] - ETA: 1s
 608/1000 [=================>............] - ETA: 0s
1000/1000 [==============================] - 0s 127us/step
Restored model, accuracy:86.60%

2.2 Checkpoint callback options

The callback provides options to give the resulting checkpoints unique names and to adjust the checkpointing frequency.

Below we train a new model and save uniquely named checkpoints every 5 epochs.

Code (the first line defines the naming pattern):

checkpoint_path='./cp-{epoch:04d}.ckpt'
checkpoint_dir=os.path.dirname(checkpoint_path)

cp_callback=tf.keras.callbacks.ModelCheckpoint(
    checkpoint_path,
    verbose=1,
    save_weights_only=True,
    period=5
)
model=create_model()
model.fit(train_images,train_labels,
          epochs=50,
          callbacks=[cp_callback],
          validation_data=(test_images,test_labels),
          verbose=0)

Output: the following files appeared in my current working directory:

Epoch 00005: saving model to ./cp-0005.ckpt

Epoch 00010: saving model to ./cp-0010.ckpt

Epoch 00015: saving model to ./cp-0015.ckpt

Epoch 00020: saving model to ./cp-0020.ckpt

Epoch 00025: saving model to ./cp-0025.ckpt

Epoch 00030: saving model to ./cp-0030.ckpt

Epoch 00035: saving model to ./cp-0035.ckpt

Epoch 00040: saving model to ./cp-0040.ckpt

Epoch 00045: saving model to ./cp-0045.ckpt

Epoch 00050: saving model to ./cp-0050.ckpt
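The `{epoch:04d}` in the checkpoint path is ordinary Python format-string syntax: the callback fills in the epoch number, zero-padded to four digits. A quick check:

```python
checkpoint_path = './cp-{epoch:04d}.ckpt'

# The callback substitutes the current epoch number into the pattern.
print(checkpoint_path.format(epoch=5))   # ./cp-0005.ckpt
print(checkpoint_path.format(epoch=50))  # ./cp-0050.ckpt
```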

Code:

import pathlib

# Sort the checkpoints by modification time.
checkpoints = pathlib.Path(checkpoint_dir).glob("*.ckpt")
checkpoints = sorted(checkpoints, key=lambda cp:cp.stat().st_mtime)
# checkpoints = [cp.with_suffix('') for cp in checkpoints]
latest = str(checkpoints[-1])
print(checkpoints)

Output: although the official guide says TensorFlow keeps only the 5 most recent checkpoints by default, all of mine are still here. (That default belongs to tf.train.Saver; the Keras ModelCheckpoint callback does not delete old files.)

[PosixPath('cp-0005.ckpt'), PosixPath('cp-0010.ckpt'), PosixPath('cp-0015.ckpt'), PosixPath('cp-0020.ckpt'), PosixPath('cp-0025.ckpt'), PosixPath('cp-0030.ckpt'), PosixPath('cp-0035.ckpt'), PosixPath('cp-0040.ckpt'), PosixPath('cp-0045.ckpt'), PosixPath('cp-0050.ckpt')]

Finally we restore the weights from the latest checkpoint and evaluate on the test set:


model=create_model()
model.load_weights(latest)
loss,acc=model.evaluate(test_images,test_labels,batch_size=20,verbose=0)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))

Output:

Restored model, accuracy: 87.80%

III. What are these files?

These files store the weights in binary format. A checkpoint contains one or more shards, each holding part of the weights, plus an index file that records which weight lives in which shard. If you train a model on just one machine, you will have a single shard with the suffix .data-00000-of-00001.
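As a sketch of the naming scheme, the `.data-XXXXX-of-YYYYY` suffix encodes a shard index and the total shard count, which can be parsed with a regular expression (the `parse_shard` helper is just for illustration, not a TensorFlow function):

```python
import re

def parse_shard(filename):
    # 'cp.ckpt.data-00000-of-00001' -> (shard index 0, 1 shard in total)
    m = re.search(r'\.data-(\d+)-of-(\d+)$', filename)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

print(parse_shard('cp.ckpt.data-00000-of-00001'))  # (0, 1)
```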

IV. Saving weights manually

Saving weights manually is just as easy: use Model.save_weights().

Code (comment out any code you no longer need):

model=create_model()
model.fit(train_images,train_labels,
          validation_data=(test_images,test_labels))
model.save_weights('./my_checkpoint')
model=create_model()
model.load_weights('./my_checkpoint')
loss,acc=model.evaluate(test_images,test_labels)
print("Restored model,accuracy:{:5.2f}%".format(100*acc))

Output:

1000/1000 [==============================] - 0s 129us/step
Restored model,accuracy:77.40%

V. Saving the entire model

You can also save the entire model, including the weights, the model configuration, and even the optimizer configuration. That way you can restore a trained model without needing any of the original code.

Keras's basic save format uses the HDF5 standard; the saved model is essentially a single binary file.

Code:


model=create_model()
model.fit(train_images,train_labels,epochs=5)
model.save('my_model.h5')

Output:

Epoch 1/5
  32/1000 [..............................] - ETA: 8s - loss: 2.3919 - acc: 0.0312
 160/1000 [===>..........................] - ETA: 1s - loss: 2.1324 - acc: 0.2750
 288/1000 [=======>......................] - ETA: 0s - loss: 1.8577 - acc: 0.4340
 416/1000 [===========>..................] - ETA: 0s - loss: 1.6522 - acc: 0.5240
 544/1000 [===============>..............] - ETA: 0s - loss: 1.5302 - acc: 0.5607
 672/1000 [===================>..........] - ETA: 0s - loss: 1.3977 - acc: 0.5952
 832/1000 [=======================>......] - ETA: 0s - loss: 1.2657 - acc: 0.6346
 960/1000 [===========================>..] - ETA: 0s - loss: 1.1927 - acc: 0.6573
1000/1000 [==============================] - 1s 689us/step - loss: 1.1757 - acc: 0.6630
Epoch 2/5
  32/1000 [..............................] - ETA: 0s - loss: 0.6902 - acc: 0.7812
 160/1000 [===>..........................] - ETA: 0s - loss: 0.5670 - acc: 0.8625
 288/1000 [=======>......................] - ETA: 0s - loss: 0.4712 - acc: 0.8924
 416/1000 [===========>..................] - ETA: 0s - loss: 0.4564 - acc: 0.8846
 544/1000 [===============>..............] - ETA: 0s - loss: 0.4474 - acc: 0.8824
 672/1000 [===================>..........] - ETA: 0s - loss: 0.4319 - acc: 0.8824
 800/1000 [=======================>......] - ETA: 0s - loss: 0.4490 - acc: 0.8762
 928/1000 [==========================>...] - ETA: 0s - loss: 0.4436 - acc: 0.8728
1000/1000 [==============================] - 0s 443us/step - loss: 0.4359 - acc: 0.8770
Epoch 3/5
  32/1000 [..............................] - ETA: 0s - loss: 0.2507 - acc: 0.9688
 160/1000 [===>..........................] - ETA: 0s - loss: 0.2609 - acc: 0.9437
 288/1000 [=======>......................] - ETA: 0s - loss: 0.2888 - acc: 0.9410
 416/1000 [===========>..................] - ETA: 0s - loss: 0.2916 - acc: 0.9375
 544/1000 [===============>..............] - ETA: 0s - loss: 0.2869 - acc: 0.9357
 672/1000 [===================>..........] - ETA: 0s - loss: 0.2711 - acc: 0.9345
 800/1000 [=======================>......] - ETA: 0s - loss: 0.2815 - acc: 0.9313
 928/1000 [==========================>...] - ETA: 0s - loss: 0.2956 - acc: 0.9256
1000/1000 [==============================] - 0s 450us/step - loss: 0.2966 - acc: 0.9220
Epoch 4/5
  32/1000 [..............................] - ETA: 0s - loss: 0.1000 - acc: 1.0000
 160/1000 [===>..........................] - ETA: 0s - loss: 0.2040 - acc: 0.9750
 320/1000 [========>.....................] - ETA: 0s - loss: 0.2031 - acc: 0.9594
 448/1000 [============>.................] - ETA: 0s - loss: 0.2248 - acc: 0.9531
 576/1000 [================>.............] - ETA: 0s - loss: 0.2187 - acc: 0.9497
 704/1000 [====================>.........] - ETA: 0s - loss: 0.2307 - acc: 0.9446
 832/1000 [=======================>......] - ETA: 0s - loss: 0.2142 - acc: 0.9495
 960/1000 [===========================>..] - ETA: 0s - loss: 0.2067 - acc: 0.9510
1000/1000 [==============================] - 0s 448us/step - loss: 0.2080 - acc: 0.9510
Epoch 5/5
  32/1000 [..............................] - ETA: 0s - loss: 0.1635 - acc: 0.9688
 160/1000 [===>..........................] - ETA: 0s - loss: 0.1373 - acc: 0.9750
 288/1000 [=======>......................] - ETA: 0s - loss: 0.1430 - acc: 0.9722
 416/1000 [===========>..................] - ETA: 0s - loss: 0.1633 - acc: 0.9712
 544/1000 [===============>..............] - ETA: 0s - loss: 0.1425 - acc: 0.9761
 672/1000 [===================>..........] - ETA: 0s - loss: 0.1425 - acc: 0.9747
 800/1000 [=======================>......] - ETA: 0s - loss: 0.1479 - acc: 0.9688
 928/1000 [==========================>...] - ETA: 0s - loss: 0.1558 - acc: 0.9601
1000/1000 [==============================] - 0s 455us/step - loss: 0.1532 - acc: 0.9620

Now create a new model and restore the entire model from the saved h5 file. You will see its configuration is identical to before:

Code (first run the commented-out part to produce the h5 file, then run the rest; or simply run the whole thing at once):

# model=create_model()
# model.fit(train_images,train_labels,epochs=5)
# model.save('my_model.h5')

new_model=keras.models.load_model("my_model.h5")
new_model.summary()

loss,accu=new_model.evaluate(test_images,test_labels)
print("Restored model, accuracy:{:5.2f}%".format(100*accu))

Output:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 512)               401920    
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                5130      
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
  32/1000 [..............................] - ETA: 1s
 576/1000 [================>.............] - ETA: 0s
1000/1000 [==============================] - 0s 136us/step
Restored model, accuracy:86.80%

Keras saves models by inspecting their architecture, but it is currently unable to save TensorFlow optimizers (from tf.train).

VI. What's next

Next, we can study the guide section of the official site for more on Keras and on saving and restoring models.

Reprinted from blog.csdn.net/m0_37393514/article/details/81126242