Machine Learning Notes: Training on Time-Series Data with Keras's SimpleRNN

I. RNN Overview

RNN stands for Recurrent Neural Network.

Traditional feed-forward neural networks cannot remember past inputs; RNNs solve this problem. They are networks with loops that allow information to persist. In the usual diagram, a cell A receives an input x_t and emits an output h_t, and the loop allows information to pass from one step of the network to the next.
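A minimal sketch of what that loop computes, in plain NumPy (the shapes and weight names here are illustrative, not from the original post):

import numpy as np

# one step of a vanilla RNN cell: the new hidden state h_t mixes the current
# input x_t with the previous hidden state h_prev -- this is the "loop"
def rnn_step(x_t, h_prev, W_x, W_h, b):
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

# toy sizes: 1 input feature, 3 hidden units (all names here are illustrative)
rng = np.random.default_rng(0)
W_x, W_h, b = rng.normal(size=(1, 3)), rng.normal(size=(3, 3)), np.zeros(3)
h = np.zeros(3)
for x_t in np.array([[0.5], [-0.1], [0.3]]):  # a length-3 input sequence
    h = rnn_step(x_t, h, W_x, W_h, b)         # state carries between steps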

II. SimpleRNN in Practice

1. SimpleRNN

This example uses Keras's SimpleRNN implementation to forecast sales from a historical dataset.

SimpleRNN is a fully-connected RNN whose output is fed back into its input. Its signature is as follows:

tf.keras.layers.SimpleRNN(
    units,
    activation='tanh',
    use_bias=True,
    kernel_initializer='glorot_uniform',
    recurrent_initializer='orthogonal',
    bias_initializer='zeros',
    kernel_regularizer=None,
    recurrent_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    recurrent_constraint=None,
    bias_constraint=None,
    dropout=0.,
    recurrent_dropout=0.,
    return_sequences=False,
    return_state=False,
    go_backwards=False,
    stateful=False,
    unroll=False)
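As a quick sanity check of the layer's behavior (the array shapes below are illustrative), the layer returns only the final hidden state unless return_sequences=True:

import numpy as np
import tensorflow as tf

x = np.zeros((4, 5, 1), dtype='float32')      # batch of 4, 5 time steps, 1 feature
print(tf.keras.layers.SimpleRNN(3)(x).shape)  # (4, 3): last hidden state only
print(tf.keras.layers.SimpleRNN(3, return_sequences=True)(x).shape)  # (4, 5, 3)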

2. The Dataset

The first column is the month (as "year-month" labels over the three-year period) and the second column is that month's sales. Sample data:

Month   Sales of shampoo over a three year period
1-01    266.0
1-02    145.9
1-03    183.1
1-04    119.3
1-05    180.3

A line plot shows that sales are quite volatile, but with a clear upward trend.

3. Reference Code

# load and plot dataset
from datetime import datetime
from pandas import read_csv
from matplotlib import pyplot

# parse the "1-01"-style labels into dates in 2001-2003
def parser(x):
    return datetime.strptime('200' + x, '%Y-%m')


series = read_csv('sales-of-shampoo-over-a-three-ye.csv', header=0, parse_dates=[0], index_col=0,
                  date_parser=parser).squeeze("columns")
# summarize first few rows
print(series.head())

# line plot
series.plot()
pyplot.show()
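To make the date parsing concrete: the month labels look like '1-01' (year index within the three-year period, then month), and prefixing '200' lets strptime read them as real months in 2001-2003:

from datetime import datetime

print(datetime.strptime('200' + '1-01', '%Y-%m'))  # 2001-01-01 00:00:00
print(datetime.strptime('200' + '3-12', '%Y-%m'))  # 2003-12-01 00:00:00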


from math import sqrt
from datetime import datetime
from pandas import DataFrame
from pandas import Series
from pandas import concat
from pandas import read_csv
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import SimpleRNN
import matplotlib

# use a non-interactive backend so figures can be saved on a headless server
matplotlib.use('Agg')
from matplotlib import pyplot
import numpy

# parse the "1-01"-style labels into dates (needed by read_csv in run() below)
def parser(x):
    return datetime.strptime('200' + x, '%Y-%m')

# frame a sequence as a supervised learning problem
def timeseries_to_supervised(data, lag=1):
    df = DataFrame(data)
    columns = [df.shift(i) for i in range(1, lag + 1)]
    columns.append(df)
    df = concat(columns, axis=1)
    return df
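# e.g. timeseries_to_supervised([266.0, 145.9, 183.1], lag=1) pairs each value
# with its predecessor: (NaN, 266.0), (266.0, 145.9), (145.9, 183.1); the
# leading NaN row is dropped later via supervised.values[n_lag:, :]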


# create a differenced series
def difference(dataset, interval=1):
    diff = list()
    for i in range(interval, len(dataset)):
        value = dataset[i] - dataset[i - interval]
        diff.append(value)
    return Series(diff)
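# e.g. difference([266.0, 145.9, 183.1]) -> Series([-120.1, 37.2]); removing
# the month-to-month trend makes the series closer to stationary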


# invert differenced value
def inverse_difference(history, yhat, interval=1):
    return yhat + history[-interval]
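# e.g. inverse_difference([266.0, 145.9], 37.2, 1) -> 183.1, recovering the
# original value from the difference plus the prior observation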


# scale train and test data to [-1, 1]
def scale(train, test):
    # fit scaler
    scaler = MinMaxScaler(feature_range=(-1, 1))
    scaler = scaler.fit(train)
    # transform train
    train = train.reshape(train.shape[0], train.shape[1])
    train_scaled = scaler.transform(train)
    # transform test
    test = test.reshape(test.shape[0], test.shape[1])
    test_scaled = scaler.transform(test)
    return scaler, train_scaled, test_scaled
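# note: the scaler is fit on the training rows only, so no information from
# the test period leaks into the scaling parameters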


# inverse scaling for a forecasted value
def invert_scale(scaler, X, yhat):
    # rebuild a full-width row (inputs + prediction) so the scaler, which was
    # fit on rows of that shape, can invert it, then keep only the prediction
    new_row = [x for x in X] + [yhat]
    array = numpy.array(new_row)
    array = array.reshape(1, len(array))
    inverted = scaler.inverse_transform(array)
    return inverted[0, -1]


# fit a SimpleRNN network to training data
def fit_rnn(train, n_batch, nb_epoch, n_neurons):
    X, y = train[:, 0:-1], train[:, -1]
    X = X.reshape(X.shape[0], 1, X.shape[1])
    model = Sequential()
    model.add(SimpleRNN(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    for i in range(nb_epoch):
        model.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
        model.reset_states()
    return model
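# note: with stateful=True the batch size is fixed by batch_input_shape, so the
# number of rows passed to fit() and predict() must be divisible by n_batch;
# run_rnn below trims two training rows for exactly this reason (20 and 12 are
# both divisible by n_batch=4)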


# run a repeated experiment
def run_rnn(series, n_lag, n_repeats, n_epochs, n_batch, n_neurons):
    # transform data to be stationary
    raw_values = series.values
    diff_values = difference(raw_values, 1)
    # transform data to be supervised learning
    supervised = timeseries_to_supervised(diff_values, n_lag)
    supervised_values = supervised.values[n_lag:, :]
    # split data into train and test-sets
    train, test = supervised_values[0:-12], supervised_values[-12:]
    # transform the scale of the data
    scaler, train_scaled, test_scaled = scale(train, test)
    # run experiment
    error_scores = list()
    for r in range(n_repeats):
        # fit the model
        train_trimmed = train_scaled[2:, :]
        rnn_model = fit_rnn(train_trimmed, n_batch, n_epochs, n_neurons)
        # forecast test dataset
        test_reshaped = test_scaled[:, 0:-1]
        test_reshaped = test_reshaped.reshape(len(test_reshaped), 1, 1)
        output = rnn_model.predict(test_reshaped, batch_size=n_batch)
        predictions = list()
        for i in range(len(output)):
            yhat = output[i, 0]
            X = test_scaled[i, 0:-1]
            # invert scaling
            yhat = invert_scale(scaler, X, yhat)
            # invert differencing
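            # forecast i targets raw_values[-(12 - i)], whose previous value
            # is raw_values[-(13 - i)]; hence interval = len(test_scaled) + 1 - i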
            yhat = inverse_difference(raw_values, yhat, len(test_scaled) + 1 - i)
            # store forecast
            predictions.append(yhat)
        # report performance
        rmse = sqrt(mean_squared_error(raw_values[-12:], predictions))
        print('%d) Test RMSE: %.3f' % (r + 1, rmse))
        error_scores.append(rmse)
    return error_scores


# configure the experiment
def run():
    # load dataset
    series = read_csv('sales-of-shampoo-over-a-three-ye.csv', header=0, parse_dates=[0], index_col=0, squeeze=True,
                      date_parser=parser)
    # configure the experiment
    n_lag = 1
    n_repeats = 30
    n_epochs = 1000
    n_batch = 4
    n_neurons = 3
    # run the experiment
    results = DataFrame()
    results['results'] = run_rnn(series, n_lag, n_repeats, n_epochs, n_batch, n_neurons)

    results.plot(title="RNN RMSE Iteration")
    # summarize results
    print(results.describe())
    # save the line plot of per-repeat RMSE (pyplot.show() would be a no-op
    # under the Agg backend selected above, so we save to a file instead)
    pyplot.savefig('plot_rnn_rmse.png')


# entry point

run()

The epoch loop is written out by hand so that reset_states() runs exactly once, at the end of each epoch. With stateful=True, Keras never clears the hidden state on its own: state carries across batches within an epoch (which is the point of a stateful layer), but it would also leak from one epoch into the next if we simply called fit(..., epochs=nb_epoch). The manual loop clears the internal state precisely at the epoch boundaries.

The training results are as follows:

1) Test RMSE: 98.509
2) Test RMSE: 96.719
3) Test RMSE: 97.397
4) Test RMSE: 82.224
5) Test RMSE: 77.399
6) Test RMSE: 87.999
7) Test RMSE: 98.929
8) Test RMSE: 105.284
9) Test RMSE: 103.274
10) Test RMSE: 74.841
11) Test RMSE: 95.774
12) Test RMSE: 101.559
13) Test RMSE: 120.763
14) Test RMSE: 95.495
15) Test RMSE: 107.296
16) Test RMSE: 86.461
17) Test RMSE: 76.648
18) Test RMSE: 83.285
19) Test RMSE: 112.248
20) Test RMSE: 77.225
21) Test RMSE: 96.166
22) Test RMSE: 84.767
23) Test RMSE: 84.564
24) Test RMSE: 92.602
25) Test RMSE: 75.713
26) Test RMSE: 86.159
27) Test RMSE: 101.537
28) Test RMSE: 109.397
29) Test RMSE: 93.453
30) Test RMSE: 99.279


results
count   30.000000
mean    93.432153
std     11.782779
min     74.840867
25%     84.614619
50%     95.634082
75%    100.972315
max    120.763087

Across the repeated runs, the test RMSE varies between roughly 74 and 120, becoming somewhat more stable over the later iterations (see the saved plot plot_rnn_rmse.png).

Source: blog.csdn.net/bashendixie5/article/details/123614758