Time series forecasting | MATLAB implements EEMD-GRU vs. GRU: ensemble empirical mode decomposition combined with a gated recurrent unit for time series forecasting

Results overview


Basic introduction

1. MATLAB implementation comparing EEMD-GRU and GRU for time series forecasting;
2. The original input series is first decomposed by EEMD into multiple component series, which are then fed into GRU models for prediction;
3. Requires MATLAB R2020b or later; outputs several metrics (RMSE, MAPE, MAE) for comparison.
Run main1_eemd_test first to perform the EEMD decomposition; then run main2_gru and main3_eemd_gru; finally run main4_compare to compare the two models.
Garbled characters in the program files are caused by a MATLAB version (encoding) mismatch. To fix this: first re-download the program. If XXX.m still appears garbled, locate XXX.m in the folder, right-click it, and open it with Notepad as a plain text file; the text file is usually not garbled. Then delete all the code of XXX.m inside MATLAB and copy the un-garbled code from the text file back into XXX.m.
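The three comparison metrics reported by the comparison step can be computed as in the following minimal sketch (the variable names T for true values and Y for predictions are assumptions for illustration, not names from the package):

% Minimal metric sketch, assuming column vectors T (true values)
% and Y (predictions) of equal length.
rmse = sqrt(mean((Y - T).^2));        % root mean squared error
mae  = mean(abs(Y - T));              % mean absolute error
mape = mean(abs((Y - T)./T)) * 100;   % mean absolute percentage error (requires T ~= 0)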

Model building

EEMD-GRU (Ensemble Empirical Mode Decomposition - Gated Recurrent Unit) is a method that combines EEMD and GRU for time series forecasting. EEMD is used to decompose the original time series into multiple Intrinsic Mode Functions (IMFs), and then GRU is used to model and forecast these IMFs.
EEMD is a data decomposition method that splits a time series into multiple IMFs plus a residual term. The IMFs have different frequency and amplitude characteristics and represent different components of the original series.
GRU (Gated Recurrent Unit) is a variant of the recurrent neural network (RNN) with a gating mechanism that captures long-term dependencies in time series; its gating units control the flow of information and the updating of the hidden state.
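For reference, the GRU gating computation mentioned above can be written out explicitly (standard formulation; bias terms omitted, σ is the sigmoid function, and ⊙ denotes element-wise multiplication):

z_t = σ(W_z x_t + U_z h_(t-1))                 (update gate)
r_t = σ(W_r x_t + U_r h_(t-1))                 (reset gate)
h~_t = tanh(W_h x_t + U_h (r_t ⊙ h_(t-1)))     (candidate state)
h_t = (1 - z_t) ⊙ h_(t-1) + z_t ⊙ h~_t         (new hidden state)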
EEMD-GRU time series forecasting process:
a. Decompose the original time series with EEMD to get multiple IMFs and a residual term.
b. Take each IMF as the input sequence of GRU, train multiple GRU models, and each model corresponds to an IMF.
c. For each GRU model, use the input sequence at historical moments to predict the value at the next moment.
d. Weight and sum the prediction results of all GRU models to obtain the final time series forecast.
EEMD-GRU time series prediction formula:
Assuming there are N IMFs, the GRU model of the i-th IMF is denoted as GRU_i.
For the i-th GRU model, its input sequence is X_i = [x_i1, x_i2, …, x_iT], where x_ij represents the value of the i-th IMF at time j.
The prediction result of model GRU_i is y_i = [y_i1, y_i2, …, y_iT], where y_ij represents the predicted value of model GRU_i at time j.
The final time series prediction result is y = w_1 * y_1 + w_2 * y_2 + ... + w_N * y_N, where w_i represents the weight of the i-th GRU model.
The above are the basic principles and formulas of EEMD-GRU time series forecasting. By combining the decomposition results of EEMD with the modeling ability of GRU, the characteristics and trends of time series can be better captured and the accuracy of forecasting can be improved.
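The scheme described above (steps a–d and the weighted sum) can be sketched as follows. This is a minimal illustration, not the code from the package: emd here is MATLAB's built-in function (R2018a+) rather than the ensemble version, and trainOneGru is a hypothetical helper standing in for the per-component GRU training and forecasting.

% Sketch: decompose, model each component with a GRU, recombine.
% Assumes a column-vector series x and a hypothetical helper
% trainOneGru that trains a GRU on one component and returns
% its one-step-ahead forecasts.
[imfs, residual] = emd(x);          % columns of imfs are the IMFs
comps = [imfs, residual];           % all components to be modeled
N = size(comps, 2);
w = ones(N, 1);                     % equal weights: plain summation
y = 0;
for i = 1:N
    y_i = trainOneGru(comps(:, i)); % forecasts for the i-th component
    y = y + w(i) * y_i;             % y = w_1*y_1 + ... + w_N*y_N
end

With all weights equal to 1, the summation simply reconstructs the series from its component forecasts; unequal weights can be fitted on a validation set.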

Programming

%% Build the hybrid network architecture
% Input feature dimension
numFeatures  = f_;
% Output feature dimension
numResponses = 1;
% Convolution filter size
FiltZise = 10;
%
    layers = [...
        % Input features
        sequenceInputLayer([numFeatures 1 1],'Name','input')
        sequenceFoldingLayer('Name','fold')
        % The layers between fold and unfold were omitted in the original
        % listing; a typical CNN-GRU hybrid is reconstructed here.
        convolution2dLayer([FiltZise 1],32,'Padding','same','Name','conv')
        reluLayer('Name','relu')
        sequenceUnfoldingLayer('Name','unfold')
        flattenLayer('Name','flatten')
        % Gated recurrent unit
        gruLayer(128,'OutputMode','sequence','Name','gru')
        dropoutLayer(0.25,'Name','drop3')
        % Fully connected layer
        fullyConnectedLayer(numResponses,'Name','fc')
        regressionLayer('Name','output')    ];

    layers = layerGraph(layers);
    layers = connectLayers(layers,'fold/miniBatchSize','unfold/miniBatchSize');

%%
% Mini-batch size
MiniBatchSize = 128;
% Maximum number of epochs
MaxEpochs = 500;
    options = trainingOptions( 'adam', ...
        'MaxEpochs',MaxEpochs, ...
        'MiniBatchSize',MiniBatchSize, ...
        'GradientThreshold',1, ...
        'InitialLearnRate',optVars.InitialLearnRate, ...
        'LearnRateSchedule','piecewise', ...
        'LearnRateDropPeriod',400, ...
        'LearnRateDropFactor',0.2, ...
        'L2Regularization',optVars.L2Regularization, ...
        'Verbose',false, ...
        'Plots','none');

%% Train the hybrid network
net = trainNetwork(XrTrain,YrTrain,layers,options);

%% EEMD core: average EMD over NR noise-assisted realizations
% Inputs (defined by the caller): x - signal; Nstd - noise standard
% deviation relative to std(x); NR - number of realizations; MaxIter -
% maximum sifting iterations. Spanish identifiers are kept from the
% original EEMD implementation.
desvio_estandar = std(x);             % normalize the signal
x = x/desvio_estandar;
xconruido = x + Nstd*randn(size(x));  % first noise-assisted copy
[modos, o, it] = emd(xconruido,'MAXITERATIONS',MaxIter);
modos = modos/NR;                     % start the running ensemble average
iter = it;
if NR >= 2
    for i = 2:NR
        xconruido = x + Nstd*randn(size(x));   % new noise realization
        [temp, ort, it] = emd(xconruido,'MAXITERATIONS',MaxIter);
        temp = temp/NR;
        % Pad the iteration-count vectors to a common length before stacking
        lit = length(it);
        [p, liter] = size(iter);
        if lit < liter
            it = [it zeros(1,liter-lit)];
        end
        if liter < lit
            iter = [iter zeros(p,lit-liter)];
        end
        iter = [iter; it];
        % Pad the IMF matrices to the same number of rows before summing,
        % since different realizations can yield different mode counts
        [filas, columnas] = size(temp);
        [alto, ancho] = size(modos);
        diferencia = alto - filas;
        if filas > alto
            modos = [modos; zeros(abs(diferencia),ancho)];
        end
        if alto > filas
            temp = [temp; zeros(abs(diferencia),ancho)];
        end
        modos = modos + temp;          % accumulate the ensemble average
    end
end
its = iter;
modos = modos*desvio_estandar;         % undo the normalization
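A hypothetical usage of the ensemble loop above, assuming it is wrapped as a function [modos, its] = eemd(x, Nstd, NR, MaxIter) (the calling convention is an assumption based on the variable names; each row of the returned matrix is one averaged mode):

% Synthetic two-tone test signal with noise
t = (1:500)';
x = sin(2*pi*t/50) + 0.5*sin(2*pi*t/8) + 0.2*randn(size(t));
% Nstd = 0.2 noise ratio, NR = 100 realizations, MaxIter = 500 siftings
[imfs, ~] = eemd(x, 0.2, 100, 500);
% Plot each averaged mode in its own panel
for k = 1:size(imfs,1)
    subplot(size(imfs,1), 1, k); plot(imfs(k,:));
end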


Origin: blog.csdn.net/kjm13182345320/article/details/132223276