Keras from Getting Started to Giving Up (14): Saving Models

Today we look at how to save a model.

Keras saves models in the HDF5 file format. Saving is straightforward: just call the model's save() method.

Last time we trained a handwritten-digit classifier on the MNIST dataset; today we learn how to save that model.
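For context, here is a minimal sketch of the network trained in the previous post, reconstructed from the summary and the JSON config shown later in this article; the epoch count and batch size are my own assumptions, not values from the original.

import numpy as np
from keras import layers, models
from keras.datasets import mnist

# Load MNIST and add a channel axis: (60000, 28, 28) -> (60000, 28, 28, 1)
(train_image, train_label), (test_image, test_label) = mnist.load_data()
train_image = np.expand_dims(train_image, axis=-1)
test_image = np.expand_dims(test_image, axis=-1)

# Architecture matching the summary printed below
model = models.Sequential()
model.add(layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(10, activation='softmax'))

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])
model.fit(train_image, train_label, epochs=5, batch_size=512)  # epochs/batch_size are assumptions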

Saving/loading the whole model
Keras uses the h5py Python package.

h5py is a dependency of Keras and should already be installed by default.

Use model.save(filepath) to save a Keras model to a single HDF5 file.

import h5py
model.save('first_model_save.h5')
This produces an .h5 file on disk.
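Since the file is ordinary HDF5, you can peek inside it with h5py itself. A minimal sketch; the group and attribute names below are what Keras 2.2.x typically writes and may differ in other versions:

import h5py

with h5py.File('first_model_save.h5', 'r') as f:
    print(list(f.keys()))        # e.g. ['model_weights', 'optimizer_weights']
    print(list(f.attrs.keys()))  # e.g. ['keras_version', 'backend', 'model_config', 'training_config']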

Create a new notebook, demo.ipynb, and load the model there:

from keras.models import load_model

# use a name that does not shadow the load_model function
loaded_model = load_model('first_model_save.h5')
loaded_model.summary()

Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 64)        640
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 64)        36928
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 64)        0
_________________________________________________________________
flatten_1 (Flatten)          (None, 9216)               0
_________________________________________________________________
dense_1 (Dense)              (None, 256)               2359552
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0
_________________________________________________________________
dense_2 (Dense)              (None, 10)                2570
=================================================================
Total params: 2,399,690
Trainable params: 2,399,690
Non-trainable params: 0
You can see that the loaded model's summary is identical to the original.

Now let's evaluate it on the test data and check the accuracy.

import numpy as np
from keras.datasets import mnist

# Reload MNIST and add a channel axis, exactly as during training
(train_image, train_label), (test_image, test_label) = mnist.load_data()
train_image = np.expand_dims(train_image, axis=-1)
test_image = np.expand_dims(test_image, axis=-1)

loaded_model.evaluate(test_image, test_label)
OUT:
10000/10000 [==============================] - 15s 2ms/step
[0.038715873086192, 0.9878]
The accuracy is 0.9878, the same as before saving.
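Because model.save() also stores the training configuration and the optimizer state, the loaded model is already compiled and can even continue training where it left off. A minimal sketch (the epoch count is an assumption):

# No compile() needed: the optimizer and loss were restored from the .h5 file
loaded_model.fit(train_image, train_label, epochs=1, batch_size=512)
loaded_model.evaluate(test_image, test_label)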

Saving to a JSON file
This saves only the model's architecture, not its weights or training configuration:

json_string = model.to_json()
This time we save the same model as above in JSON format:

my_model_json = model.to_json()
my_model_json  # display the JSON string in the notebook
# save it to disk
with open('my_model.json', 'w') as f:
    f.write(my_model_json)
OUT:
'{"class_name": "Sequential", "config": {"name": "sequential_1", "layers":
[{"class_name": "Conv2D", "config": {"name": "conv2d_1", "trainable": true, "batch_input_shape": [null, 28, 28, 1], "dtype": "float32", "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "valid", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "relu", "use_bias": true, "kernel_initializer":
{"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer":
{"class_name": "Zeros", "config": {}},"kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}},
{"class_name": "Conv2D", "config": {"name": "conv2d_2", "trainable": true, "filters": 64, "kernel_size": [3, 3], "strides": [1, 1], "padding": "valid", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "relu", "use_bias": true, "kernel_initializer":
{"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer":
{"class_name": "Zeros", "config": {}},"kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}},
{"class_name": "MaxPooling2D", "config": {"name": "max_pooling2d_1", "trainable": true, "pool_size": [2, 2], "padding": "valid", "strides": [2, 2], "data_format": "channels_last"}},
{"class_name": "Flatten", "config": {"name": "flatten_1", "trainable": true, "data_format": "channels_last"}},
{"class_name": "Dense", "config": {"name": "dense_1", "trainable": true, "units": 256, "activation": "relu", "use_bias": true, "kernel_initializer":
{"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer":
{"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}},
{"class_name": "Dropout", "config": {"name": "dropout_1", "trainable": true, "rate": 0.5, "noise_shape": null, "seed": null}},
{"class_name": "Dense", "config": {"name": "dense_2", "trainable": true, "units": 10, "activation": "softmax", "use_bias": true, "kernel_initializer":
{"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer":
{"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}]}, "keras_version": "2.2.4", "backend": "tensorflow"}'
Loading from JSON

from keras.models import model_from_json

model = model_from_json(json_string)

from keras.models import model_from_json

with open('my_model.json', 'r') as f:
    my_model_json = f.read()
model = model_from_json(my_model_json)
model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 64)        640
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 64)        36928
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 64)        0
_________________________________________________________________
flatten_1 (Flatten)          (None, 9216)               0
_________________________________________________________________
dense_1 (Dense)              (None, 256)               2359552
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0
_________________________________________________________________
dense_2 (Dense)              (None, 10)                2570
=================================================================
Total params: 2,399,690
Trainable params: 2,399,690
Non-trainable params: 0
_________________________________________________________________
The model architecture is unchanged.
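As an aside, Keras 2.2.x also offers a YAML counterpart to to_json() with the same "architecture only" semantics (it needs the PyYAML package). A minimal sketch:

from keras.models import model_from_yaml

yaml_string = model.to_yaml()              # architecture only, as a YAML string
yaml_model = model_from_yaml(yaml_string)  # rebuilds an uncompiled, untrained model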

If you call model.evaluate(test_image, test_label) now, it raises an error, because the model has not been compiled.

So let's compile it right away:

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])
model.evaluate(test_image, test_label)

OUT:
[12.63102478942871, 0.1118]
The accuracy is only 0.1118. What's going on?

Because the model rebuilt from JSON never loaded the trained weights; with 10 classes and randomly initialized weights it is essentially guessing.

You can also save only the model's weights:

# generic form
model.save_weights('my_model_weights.h5')

# in the training notebook we save them as:
model.save_weights('first_model_save_weights.h5')
Then, back in demo.ipynb:

model.load_weights('first_model_save_weights.h5')
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])
model.evaluate(test_image, test_label)
OUT:
[0.038715873086192, 0.9878]
The accuracy is restored to 0.9878, because the architecture rebuilt from JSON matches the one the weights were saved from.
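If you want to check the round trip directly rather than through accuracy, you can compare the weight arrays themselves. A sketch, assuming loaded_model still holds the model restored from first_model_save.h5:

import numpy as np

# Every weight tensor of the JSON-rebuilt model (after load_weights)
# should equal the corresponding tensor of the fully restored model
for w_json, w_full in zip(model.get_weights(), loaded_model.get_weights()):
    assert np.allclose(w_json, w_full)
print('weights match')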

Is there a way to load only part of the weights?

When building the model, give each layer a name argument, much like the name attribute on an HTML form element:

model.add(layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1), name='conv_1'))
model.add(layers.Conv2D(64, (3, 3), activation='relu', name='conv_2'))
Then in demo.ipynb, load the weights with by_name=True:

model.load_weights('first_model_save_weights.h5', by_name=True)  # only layers with matching names get weights
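To make partial loading concrete, here is a hedged sketch that reuses only the named convolutional layers in a new, smaller model; it assumes the weights file was saved from a model whose layers were named conv_1 and conv_2 as shown above:

from keras import layers, models

# New model: same conv stack (matching names), but a fresh classification head
new_model = models.Sequential()
new_model.add(layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1), name='conv_1'))
new_model.add(layers.Conv2D(64, (3, 3), activation='relu', name='conv_2'))
new_model.add(layers.MaxPooling2D((2, 2)))
new_model.add(layers.Flatten())
new_model.add(layers.Dense(10, activation='softmax', name='new_head'))

# Only layers whose names match entries in the weight file receive weights;
# new_head keeps its fresh random initialization
new_model.load_weights('first_model_save_weights.h5', by_name=True)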

Summary:

model.save() writes everything to an .h5 file: the architecture, the weights, the training configuration, and the optimizer state.

model.to_json() saves only the model's architecture as a JSON string; model = model_from_json(json_string) rebuilds the (uncompiled, untrained) model from it.

model.save_weights() saves only the model's weights; load them back with model.load_weights() into a model with a matching architecture.
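A compact recap of the three options, using the filenames from this post (a sketch):

from keras.models import load_model, model_from_json

# 1. Everything: architecture + weights + training config + optimizer state
model.save('first_model_save.h5')
restored = load_model('first_model_save.h5')         # ready to evaluate/predict/fit

# 2. Architecture only
json_string = model.to_json()
rebuilt = model_from_json(json_string)               # random weights, needs compile()

# 3. Weights only
model.save_weights('first_model_save_weights.h5')
rebuilt.load_weights('first_model_save_weights.h5')  # architecture must match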
