Keras: saving the best model during training

 

Most deep learning models take a long time to train, and if the training process is interrupted unexpectedly, the time already spent is wasted because the run has to start over. In this exercise we use a Keras checkpoint to save the model during training; my understanding is that checkpointing means writing a good model to disk as training progresses. If training is interrupted, we can load the most recently saved file and continue training from there, so the earlier work is not thrown away (a sketch of resuming from a checkpoint is given after the results below).

So how do we checkpoint? Let's work through an exercise to find out.

  • Data: Pima Indians diabetes dataset
  • Neural network topology: 8-12-8-1

1. Save a checkpoint each time the model improves

If the network's validation performance improves during training, save the model's weights for that epoch to disk.

Code:

# -*- coding: utf-8 -*-
# Checkpoint NN model improvements
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import numpy as np
import urllib

# load the Pima Indians diabetes dataset from the UCI repository
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
raw_data = urllib.urlopen(url)
dataset = np.loadtxt(raw_data, delimiter=",")
X = dataset[:, 0:8]
y = dataset[:, 8]

seed = 42
np.random.seed(seed)

# create model: 8-12-8-1 topology
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))

# compile
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# checkpoint: whenever validation accuracy improves, save the weights to a new file (one save per improvement)
filepath = "weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]

# fit
model.fit(X, y, validation_split=0.33, nb_epoch=150, batch_size=10, callbacks=callbacks_list, verbose=0)

Partial output:

Epoch 00139: val_acc did not improve
Epoch 00140: val_acc improved from 0.70472 to 0.71654, saving model to weights-improvement-140-0.72.hdf5
Epoch 00141: val_acc did not improve
Epoch 00142: val_acc did not improve
Epoch 00143: val_acc did not improve
Epoch 00144: val_acc did not improve
Epoch 00145: val_acc did not improve
Epoch 00146: val_acc did not improve
Epoch 00147: val_acc did not improve
Epoch 00148: val_acc did not improve
Epoch 00149: val_acc did not improve

When we run the program and look in the local working folder, we find that it has automatically saved an hdf5 file for each improvement in validation accuracy. To resume training after an interruption, rebuild the model, load the most recent of these files, and continue fitting, as sketched below.
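Resuming is not shown in the original exercise, so the following is only a minimal sketch. It assumes the same 8-12-8-1 architecture and the same Keras 1 / Python 2 style API as the code above, and the checkpoint filename is just an example taken from the output of this particular run; in practice you would use whichever file your own run saved last.

# Sketch: resume training from a previously saved checkpoint (example filename below)
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import numpy as np
import urllib

url = "http://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
dataset = np.loadtxt(urllib.urlopen(url), delimiter=",")
X = dataset[:, 0:8]
y = dataset[:, 8]

# rebuild exactly the same 8-12-8-1 network that produced the weights file
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))

# load the weights from the saved checkpoint (example name from the run above)
model.load_weights("weights-improvement-140-0.72.hdf5")
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# keep checkpointing improvements while training continues
filepath = "weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')

# continue training for more epochs
model.fit(X, y, validation_split=0.33, nb_epoch=50, batch_size=10, callbacks=[checkpoint], verbose=0)

If you would rather keep a single file holding only the best weights seen so far, instead of one file per improvement, the same callback can be pointed at a fixed filename (the name below is just an example):

filepath = "weights.best.hdf5"  # fixed name: each improvement overwrites the previous best
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')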

Reposted from: https://anifacc.github.io/deeplearning/machinelearning/python/2017/08/30/dlwp-ch14-keep-best-model-checkpoint/
