Deep Neural Networks for Time Series Forecasting Based on Stacked Autoencoders

The Adaptive Iterative Extended Kalman Filtering algorithm (AIEK) is a filtering algorithm that gradually adapts to different states and environments through an iterative process, thereby improving the filtering result.

The basic idea is to adaptively adjust the filter parameters in each iteration according to the observed data and the state equation, so that the filter better fits the distribution of the actual data. Specifically, the algorithm consists of the following steps:

1. Initialization: set initial values for the filter parameters, including the state transition matrix, the measurement matrix, the process noise covariance and the measurement noise covariance.
2. Prediction: using the current state equation and filter parameters, predict the next state and compute the prediction error.
3. Correction: using the prediction and the actual observation, correct the predicted state so that it better fits the distribution of the actual data.
4. Parameter update: based on the correction result, adaptively adjust the filter parameters so that they fit the data better in the next iteration.
Because the algorithm is adaptive and iterative, it can gradually adapt to different states and environments and thus improve the filtering result. In practice, the parameter-adjustment method and iteration strategy can be chosen to suit the specific problem.
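As an illustration, a minimal MATLAB sketch of one such predict-correct-adapt iteration for a linear state-space model is shown below. The matrices A, H, Q, R, the state estimate x with covariance P, the observation z and the forgetting factor alpha are assumed to be given, and the innovation-based adaptation rule is only one common choice, not necessarily the exact variant described above.

% Prediction: propagate the state and covariance with the current parameters
x_pred = A * x;                                   % predicted state
P_pred = A * P * A' + Q;                          % predicted covariance

% Correction: use the observation to correct the prediction
v = z - H * x_pred;                               % innovation (prediction error)
S = H * P_pred * H' + R;                          % innovation covariance
K = P_pred * H' / S;                              % Kalman gain
x = x_pred + K * v;                               % corrected state
P = (eye(size(P)) - K * H) * P_pred;              % corrected covariance

% Parameter update: adapt the noise covariances from the innovation
R = alpha * R + (1 - alpha) * (v * v' - H * P_pred * H');
Q = alpha * Q + (1 - alpha) * (K * (v * v') * K');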

Clear environment variables

warning off             % turn off warning messages
close all               % close all open figure windows
clear                   % clear workspace variables
clc                     % clear the command window

Import data (a 20-step sliding window has already been applied)

P_train = xlsread('P_x_train.xlsx', 'B2:U5043')';   % training inputs,  20 x 5042 (features in rows, as mapminmax expects)
T_train = xlsread('P_y_train.xlsx', 'B1:B5042')';   % training targets,  1 x 5042
P_test  = xlsread('P_x_test.xlsx' , 'B2:U1720')';   % test inputs,      20 x 1719
T_test  = xlsread('P_y_test.xlsx' , 'B1:B1719')';   % test targets,      1 x 1719
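For reference, a 20-step sliding window over a raw series y (a column vector; the variable names here are illustrative only, not from the post) could be built as follows:

win = 20;                               % window length
num = numel(y) - win;                   % number of input/target pairs
X = zeros(num, win);                    % each row: 20 past values
Y = zeros(num, 1);                      % each row: the next value to predict
for k = 1 : num
    X(k, :) = y(k : k + win - 1);
    Y(k)    = y(k + win);
end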

Data normalization

[p_train, ps_input] = mapminmax(P_train, 0, 1);
p_test = mapminmax('apply', P_test, ps_input);

[t_train, ps_output] = mapminmax(T_train, 0, 1);
t_test = mapminmax('apply', T_test, ps_output);

Transpose to fit the model

p_train = p_train'; p_test = p_test';
t_train = t_train'; t_test = t_test';
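The later snippets rely on the dimension variables f_, M, N and outdim. Assuming these stand for the input dimension, the number of training samples, the number of test samples and the output dimension, they can be taken directly from the data:

f_     = size(p_train, 2);              % input dimension (20, the window length)
M      = size(p_train, 1);              % number of training samples
N      = size(p_test , 1);              % number of test samples
outdim = size(t_train, 2);              % output dimension (1)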

Build deep neural networks

hidden   = [f_, 50, 50];                     % layer sizes: input dimension followed by the autoencoder hidden layers [50, 50]
sae_lr   = 0.5;                              % autoencoder learning rate
sae_mark = 0;                                % input noise (masking) fraction; a nonzero value gives a stacked denoising autoencoder
sae_act  = 'sigm';                           % autoencoder activation function

opts.numepochs = 500;                        % maximum number of training epochs for the autoencoders
opts.batchsize = M / 2;                      % mini-batch size; M / batchsize must be an integer

%%  Build the stacked autoencoder
sae = saesetup(hidden);

%%  Set the stacked autoencoder parameters
for i = 1 : length(hidden) - 1
    sae.ae{i}.activation_function     = sae_act;    % activation function
    sae.ae{i}.learningRate            = sae_lr;     % learning rate
    sae.ae{i}.inputZeroMaskedFraction = sae_mark;   % input noise (masking) fraction
end

%%  Train the stacked autoencoder (layer-wise pretraining)
sae = saetrain(sae, p_train, opts);

%%  Build the deep neural network
nn = nnsetup([hidden, outdim]);
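Between setting up the network and computing the metrics, the network still has to be fine-tuned and used for prediction. Assuming the standard DeepLearnToolbox workflow (as in its stacked-autoencoder example), the pretrained encoder weights are copied into the network, the network is fine-tuned with nntrain, and predictions are obtained with a forward pass; T_sim1 and T_sim2 are taken to be the de-normalized predictions that the metric code below expects. A rough sketch:

%%  Copy the pretrained encoder weights into the deep network
for i = 1 : length(hidden) - 1
    nn.W{i} = sae.ae{i}.W{1};
end
nn.activation_function = sae_act;            % same activation as the autoencoders
nn.learningRate        = sae_lr;

%%  Fine-tune on the normalized training data
nn = nntrain(nn, p_train, t_train, opts);

%%  Forward pass to obtain predictions on the normalized scale
nn = nnff(nn, p_train, zeros(M, outdim));
t_sim1 = nn.a{end};
nn = nnff(nn, p_test , zeros(N, outdim));
t_sim2 = nn.a{end};

%%  De-normalize so the metrics are computed in the original units
T_sim1 = mapminmax('reverse', t_sim1', ps_output)';
T_sim2 = mapminmax('reverse', t_sim2', ps_output)';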

Calculation of relevant indicators

%  R2
R1 = 1 - norm(T_train - T_sim1')^2 / norm(T_train - mean(T_train))^2;
R2 = 1 - norm(T_test  - T_sim2')^2 / norm(T_test  - mean(T_test ))^2;

disp(['R2 of the training set: ', num2str(R1)])
disp(['R2 of the test set: ', num2str(R2)])

%  MAE
mae1 = sum(abs(T_sim1' - T_train)) ./ M ;
mae2 = sum(abs(T_sim2' - T_test )) ./ N ;

disp(['MAE of the training set: ', num2str(mae1)])
disp(['MAE of the test set: ', num2str(mae2)])

%  MBE
mbe1 = sum(T_sim1' - T_train) ./ M ;
mbe2 = sum(T_sim2' - T_test ) ./ N ;

disp(['MBE of the training set: ', num2str(mbe1)])
disp(['MBE of the test set: ', num2str(mbe2)])


Origin: blog.csdn.net/m0_37702416/article/details/132795969