Tutorial | TensorFlow 1.11 Tutorial: Learn and Use Machine Learning - Save and Restore Models (2018-09-14 version)

Updated to the 2018-09-14 version.

Translated from the official TensorFlow tutorial.

Model progress can be saved during and after training. This means a model can resume where it left off, avoiding long training times. Saving also means you can share your model, and others can recreate your work. When publishing research models and techniques, most machine learning practitioners share:

  • the code used to create the model
  • the model's trained weights, or parameters

Sharing this data helps others understand how the model works and lets them try it themselves on new data.

There is more than one way to save a TensorFlow model, depending on the API you are using. This guide uses tf.keras, a high-level API for building and training models in TensorFlow. For other approaches, see the TensorFlow Save and Restore guide or Saving in eager.


Setup

Install and import

Install and import TensorFlow and dependencies:

!pip install -q h5py pyyaml
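
As a quick check that the dependencies installed correctly (not part of the original tutorial), you can print their versions; h5py and yaml are the standard import names for these packages:

import h5py
import yaml

# Print the installed versions of the HDF5 and YAML bindings
print(h5py.__version__, yaml.__version__)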

Get an example dataset

We'll use the MNIST dataset to train our model and demonstrate saving weights. We only use the first 1000 examples:

from __future__ import absolute_import, division, print_function

import os

import tensorflow as tf
from tensorflow import keras

tf.__version__
'1.11.0-rc0'
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()

train_labels = train_labels[:1000]
test_labels = test_labels[:1000]

train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11493376/11490434 [==============================] - 2s 0us/step

Define a model

Let's build a simple model we'll use to demonstrate saving and loading weights.

# Returns a short sequential model
def create_model():
  model = tf.keras.models.Sequential([
    keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(784,)),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10, activation=tf.nn.softmax)
  ])
  
  model.compile(optimizer=tf.keras.optimizers.Adam(), 
                loss=tf.keras.losses.sparse_categorical_crossentropy,
                metrics=['accuracy'])
  
  return model


# Create a basic model instance
model = create_model()
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 512)               401920    
_________________________________________________________________
dropout (Dropout)            (None, 512)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 10)                5130      
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
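
As a sanity check on the summary above, the parameter counts follow directly from the layer shapes: the first Dense layer has 784 × 512 weights plus 512 biases (401,920 parameters), and the output Dense layer has 512 × 10 weights plus 10 biases (5,130 parameters), for a total of 407,050.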

Save checkpoints during training

Automatically saving checkpoints during and at the end of training lets you use a trained model without having to retrain it, or pick up training where you left off in case the process is interrupted.

tf.keras.callbacks.ModelCheckpoint is the callback that performs this task. It takes a few arguments to configure checkpointing.

Checkpoint callback usage

Train the model and pass it the ModelCheckpoint callback:

checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

# Create a checkpoint callback
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path, 
                                                 save_weights_only=True,
                                                 verbose=1)

model = create_model()

model.fit(train_images, train_labels,  epochs = 10, 
          validation_data = (test_images,test_labels),
          callbacks = [cp_callback])  # pass callback to training
Train on 1000 samples, validate on 1000 samples
Epoch 1/10
 800/1000 [=======================>......] - ETA: 0s - loss: 1.2836 - acc: 0.6400
Epoch 00001: saving model to training_1/cp.ckpt
WARNING:tensorflow:This model was compiled with a Keras optimizer (<tensorflow.python.keras.optimizers.Adam object at 0x7f8d67c20198>) but is being saved in TensorFlow format with `save_weights`. The model's weights will be saved, but unlike with TensorFlow optimizers in the TensorFlow format the optimizer's state will not be saved.

...

Consider using a TensorFlow optimizer from tf.train.
1000/1000 [==============================] - 0s 195us/step - loss: 0.0368 - acc: 1.0000 - val_loss: 0.4113 - val_acc: 0.8660

This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:

!ls {checkpoint_dir}
checkpoint  cp.ckpt.data-00000-of-00001  cp.ckpt.index

Create a new, untrained model. When restoring a model from weights only, you must have a model with the same architecture as the original. Since it's the same model architecture, we can share weights even though it's a different instance of the model.

Now rebuild a fresh, untrained model and evaluate it on the test set. An untrained model will perform at chance level (about 10% accuracy):

model = create_model()

loss, acc = model.evaluate(test_images, test_labels)
print("Untrained model, accuracy: {:5.2f}%".format(100*acc))
1000/1000 [==============================] - 0s 121us/step
Untrained model, accuracy: 12.50%

Then load the weights from the checkpoint and re-evaluate:

model.load_weights(checkpoint_path)
loss,acc = model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
1000/1000 [==============================] - 0s 36us/step
Restored model, accuracy: 86.60%

Checkpoint callback options

The callback provides several options to give the resulting checkpoints unique names and to adjust the checkpointing frequency.

Train a new model, and save uniquely named checkpoints once every 5 epochs:

# Include the epoch in the file name (uses `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(
    checkpoint_path, verbose=1, save_weights_only=True,
    # Save weights every 5 epochs
    period=5)

model = create_model()
model.fit(train_images, train_labels,
          epochs = 50, callbacks = [cp_callback],
          validation_data = (test_images,test_labels),
          verbose=0)
Epoch 00005: saving model to training_2/cp-0005.ckpt
WARNING:tensorflow:This model was compiled with a Keras optimizer (<tensorflow.python.keras.optimizers.Adam object at 0x7f8dc5d86198>) but is being saved in TensorFlow format with `save_weights`. The model's weights will be saved, but unlike with TensorFlow optimizers in the TensorFlow format the optimizer's state will not be saved.

Consider using a TensorFlow optimizer from tf.train.
...

Epoch 00050: saving model to training_2/cp-0050.ckpt
WARNING:tensorflow:This model was compiled with a Keras optimizer (<tensorflow.python.keras.optimizers.Adam object at 0x7f8dc5d86198>) but is being saved in TensorFlow format with `save_weights`. The model's weights will be saved, but unlike with TensorFlow optimizers in the TensorFlow format the optimizer's state will not be saved.

Consider using a TensorFlow optimizer from tf.train.

Now, look at the resulting checkpoints (sorted by modification date):

import pathlib

# Sort the checkpoints by modification time
checkpoints = pathlib.Path(checkpoint_dir).glob("*.index")
checkpoints = sorted(checkpoints, key=lambda cp:cp.stat().st_mtime)
checkpoints = [cp.with_suffix('') for cp in checkpoints]
latest = str(checkpoints[-1])
checkpoints
[PosixPath('training_2/cp-0005.ckpt'),
 PosixPath('training_2/cp-0010.ckpt'),
 PosixPath('training_2/cp-0015.ckpt'),
 PosixPath('training_2/cp-0020.ckpt'),
 PosixPath('training_2/cp-0025.ckpt'),
 PosixPath('training_2/cp-0030.ckpt'),
 PosixPath('training_2/cp-0035.ckpt'),
 PosixPath('training_2/cp-0040.ckpt'),
 PosixPath('training_2/cp-0045.ckpt'),
 PosixPath('training_2/cp-0050.ckpt')]

Note: the default TensorFlow format only keeps the 5 most recent checkpoints.

To test, reset the model and load the latest checkpoint:

model = create_model()
model.load_weights(latest)
loss, acc = model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
1000/1000 [==============================] - 0s 94us/step
Restored model, accuracy: 87.60%
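
As an alternative to globbing and sorting by modification time above, you can ask TensorFlow for the most recent checkpoint directly. This is a minimal sketch, assuming the `checkpoint` bookkeeping file (listed earlier) was written alongside the weights:

# Query the 'checkpoint' bookkeeping file for the most recent checkpoint prefix
latest = tf.train.latest_checkpoint(checkpoint_dir)
print(latest)

model = create_model()
model.load_weights(latest)
loss, acc = model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))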

What are these files?

The code above stores the weights to a collection of checkpoint-formatted files that contain only the trained weights in a binary format.

Checkpoints contain:

  • One or more shards that contain your model's weights.
  • An index file that indicates which weights are stored in which shard.

If you are only training a model on a single machine, you'll have one shard with the suffix .data-00000-of-00001.
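
To peek inside these files, here is a minimal sketch (not from the original tutorial) that lists the variables stored in a checkpoint; tf.train.list_variables accepts a checkpoint directory or prefix and returns (name, shape) pairs:

# List the variable names and shapes saved in the latest checkpoint
for name, shape in tf.train.list_variables(checkpoint_dir):
    print(name, shape)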


Manually save weights

Above you saw how to load weights into a model.

Manually saving the weights is just as simple: use the Model.save_weights method.

# Save the weights
model.save_weights('./checkpoints/my_checkpoint')

# Restore the weights
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')

loss,acc = model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
WARNING:tensorflow:This model was compiled with a Keras optimizer (<tensorflow.python.keras.optimizers.Adam object at 0x7f8dc5155748>) but is being saved in TensorFlow format with `save_weights`. The model's weights will be saved, but unlike with TensorFlow optimizers in the TensorFlow format the optimizer's state will not be saved.

Consider using a TensorFlow optimizer from tf.train.
1000/1000 [==============================] - 0s 100us/step
Restored model, accuracy: 87.60%
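
By default, save_weights uses the TensorFlow checkpoint format shown above. Assuming the save_format argument is available in this version of tf.keras (an assumption, not shown in the original tutorial), you can request HDF5 explicitly, or simply pass a filename ending in .h5:

# Save the weights in HDF5 format instead of the TensorFlow checkpoint format
# (save_format is assumed to be supported by this tf.keras version)
model.save_weights('./checkpoints/my_checkpoint.h5', save_format='h5')
model.load_weights('./checkpoints/my_checkpoint.h5')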

Save the entire model

The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration. This lets you checkpoint a model and resume training later, from the exact same state, without access to the original code.

Saving a fully functional model in Keras is very useful: you can load it in TensorFlow.js and then train and run it in a web browser.

Keras provides a basic save format using the HDF5 standard. For our purposes, the saved model can be treated as a single binary blob.

model = create_model()

model.fit(train_images, train_labels, epochs=5)

# Save the entire model to an HDF5 file
model.save('my_model.h5')
Epoch 1/5
1000/1000 [==============================] - 0s 419us/step - loss: 1.1345 - acc: 0.6800
Epoch 2/5
1000/1000 [==============================] - 0s 162us/step - loss: 0.4104 - acc: 0.8880
Epoch 3/5
1000/1000 [==============================] - 0s 159us/step - loss: 0.2768 - acc: 0.9270
Epoch 4/5
1000/1000 [==============================] - 0s 162us/step - loss: 0.2194 - acc: 0.9530
Epoch 5/5
1000/1000 [==============================] - 0s 157us/step - loss: 0.1678 - acc: 0.9600

Now recreate the model from that file:

# Recreate the exact same model, including weights and optimizer
new_model = keras.models.load_model('my_model.h5')
new_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_12 (Dense)             (None, 512)               401920    
_________________________________________________________________
dropout_6 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_13 (Dense)             (None, 10)                5130      
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________

Check its accuracy:

loss, acc = new_model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
1000/1000 [==============================] - 0s 120us/step
Restored model, accuracy: 84.70%

This technique saves everything:

  • The weight values
  • The model's configuration (architecture)
  • The optimizer configuration

Keras saves models by inspecting the architecture. Currently, it is not able to save TensorFlow optimizers (from tf.train). When using those, you will need to re-compile the model after loading, and you will lose the state of the optimizer; see the sketch below.
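
As a minimal sketch of that workflow (assuming the model had been compiled with a tf.train optimizer such as tf.train.AdamOptimizer), load the architecture and weights without compiling, then re-compile with a fresh optimizer; the optimizer's state starts from scratch:

# Load the architecture and weights only; the tf.train optimizer state is not restored
new_model = keras.models.load_model('my_model.h5', compile=False)

# Re-compile with a fresh optimizer before resuming training
new_model.compile(optimizer=tf.train.AdamOptimizer(),
                  loss=tf.keras.losses.sparse_categorical_crossentropy,
                  metrics=['accuracy'])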


What's next?

This tutorial was a quick guide to saving and loading models with tf.keras.

Reposted from blog.csdn.net/qq_20084101/article/details/82422727