Exploring Keras: regression - a Boston house-price prediction example (K-fold validation for small samples)


Reference book: Deep Learning with Python (Keras)

Reference code: https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/3.7-predicting-house-prices.ipynb


from keras.datasets import boston_housing
from keras import models
from keras import layers

import numpy as np
import matplotlib.pyplot as plt


def build_model():
    model = models.Sequential()
    model.add(layers.Dense(64, 
                           activation = 'relu', 
                           input_shape = (train_data.shape[1],)))
    model.add(layers.Dense(64,
                           activation = 'relu'))
    model.add(layers.Dense(1)) # no activation: an activation function would constrain the output range
    model.compile(optimizer = 'rmsprop',
                  loss = 'mse',
                  metrics=['mae'])
    return model
    

(train_data, train_targets), (test_data, test_targets) = \
    boston_housing.load_data()

# feature-wise normalization, using statistics from the training data only
mean = train_data.mean(axis=0)
std  = train_data.std(axis=0)
train_data -= mean
train_data /= std

# the test data must be scaled with the *training* mean and std
test_data -= mean
test_data /= std


#++++++++++++++++K-fold validation
k = 4
num_val_samples = len(train_data) // k
num_epochs = 500
all_mae_histories = []

for i in range(k):
    print('Processing fold #', i)
    val_data = train_data[i*num_val_samples : (i+1)*num_val_samples]
    val_targets = train_targets[i*num_val_samples : (i+1)*num_val_samples]
    
    partial_train_data = np.concatenate( 
                         [train_data[: i*num_val_samples],
                         train_data[(i+1)*num_val_samples :]],
                         axis = 0)
    partial_train_targets = np.concatenate(
                         [train_targets[: i*num_val_samples],
                         train_targets[(i+1)*num_val_samples :]],
                         axis = 0)
    model = build_model()
    history = model.fit(partial_train_data,
                        partial_train_targets,
                        validation_data = (val_data, val_targets),
                        epochs = num_epochs,
                        batch_size = 1,
                        verbose = 0) # silent: 4 folds x 500 epochs at batch_size=1 would flood the console
    # Keras >= 2.3 / tf.keras records this metric as 'val_mae';
    # older standalone Keras used 'val_mean_absolute_error'
    mae_key = 'val_mae' if 'val_mae' in history.history else 'val_mean_absolute_error'
    all_mae_histories.append(history.history[mae_key])

# average the per-epoch validation MAE across the k folds (once, after the loop)
average_mae_history = [np.mean([x[i] for x in all_mae_histories])
                       for i in range(num_epochs)]

#+++++++++++++++++ training the final model
model = build_model()
model.fit(train_data, 
          train_targets,
          epochs = 100,
          batch_size = 16,
          verbose = 1)
test_mse, test_mae = model.evaluate(test_data, test_targets) # evaluate returns [loss (mse), mae]
print('Test MAE:', test_mae)


#+++++++++++++++++ plotting the per-epoch validation MAE
plt.plot(range(1, len(average_mae_history)+1), average_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()
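
The companion notebook goes one step further: because the raw curve is noisy, it replaces each point with an exponential moving average of the preceding points (factor 0.9) and omits the first 10 points, which sit on a very different scale. That makes the epoch where overfitting begins much easier to read off. A sketch along those lines:

def smooth_curve(points, factor=0.9):
    # each point becomes an exponential moving average of its predecessors
    smoothed = []
    for point in points:
        if smoothed:
            smoothed.append(smoothed[-1] * factor + point * (1 - factor))
        else:
            smoothed.append(point)
    return smoothed

smooth_mae_history = smooth_curve(average_mae_history[10:])
plt.plot(range(1, len(smooth_mae_history) + 1), smooth_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE (smoothed)')
plt.show()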

Summary: K-fold validation with small samples:

    To tune the network's configuration (for example, the number of training epochs) while also evaluating it, we need to split the data into a training set and a validation set. But when there is little data to begin with (the small-sample problem), the validation set ends up very small, so validation scores can fluctuate widely depending purely on which samples happen to land in the training set and which in the validation set. In other words, the validation scores may have high variance with respect to the split, which makes it hard to evaluate the model reliably, as the short sketch below demonstrates.
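
A minimal sketch of this effect (my illustration, not from the book), reusing build_model and the normalized train_data from the listing above: the same model is trained on three different random 75/25 splits; the epoch count of 30 is arbitrary.

for seed in range(3):
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(train_data))   # a different shuffle per seed
    split = int(0.75 * len(idx))
    tr_idx, val_idx = idx[:split], idx[split:]
    m = build_model()
    m.fit(train_data[tr_idx], train_targets[tr_idx],
          epochs=30, batch_size=16, verbose=0)
    _, val_mae = m.evaluate(train_data[val_idx], train_targets[val_idx],
                            verbose=0)
    print('split seed', seed, '-> validation MAE:', val_mae)

With only ~100 held-out samples, the printed MAE values typically differ by a noticeable margin from seed to seed.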

    The best practice in this situation is K-fold validation: split the data into K partitions, instantiate K identical models, train each one on K-1 partitions while evaluating it on the remaining partition, and use the average of the K validation scores as the model's score; this is exactly what the loop above implements.

  • If the training set is relatively small, increase k.

    A larger k puts more data into training on each iteration, which minimizes the bias of the performance estimate, at the cost of a longer run time; and because the training partitions then overlap heavily, the per-fold scores are highly correlated, which raises the variance of the evaluation (see the sketch after this list).

  • If the training set is relatively large, decrease k.

    A smaller k lowers the computational cost of repeatedly refitting the model on different partitions, while the averaged score still gives an accurate estimate of model performance.
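
A back-of-the-envelope illustration of this trade-off (my own numbers; 404 is the size of the Boston housing training set):

n = 404  # Boston housing training-set size
for k in (2, 4, 8, 16):
    n_val = n // k                 # samples held out per fold
    print(f'k={k:2d}: {n - n_val} training samples per fold, '
          f'{n_val} validation samples, {k} models to train')

Doubling k roughly doubles the total training cost while adding only a modest amount of extra training data per fold.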
 
