Time series prediction | MATLAB implements BO-GRU Bayesian optimization gated recurrent unit time series prediction

Results overview

[Nine result figures (prediction curves and error plots) appeared here in the original post.]

Basic introduction

MATLAB implementation of BO-GRU, Bayesian-optimized gated recurrent unit time series forecasting: a time series forecasting model based on a gated recurrent unit whose hyperparameters are tuned by Bayesian optimization (BO-GRU / Bayes-GRU).
1. The optimized hyperparameters are: learning rate, number of hidden-layer nodes, and the regularization coefficient.
2. Evaluation metrics include R2, MAE, MSE, RMSE, and MAPE.
3. The required environment is MATLAB R2018b or later.
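The evaluation metrics listed above can be computed directly from the true and predicted sequences. A minimal sketch (the variable names `Ytrue` and `Ypred` are illustrative placeholders, not taken from the original program):

```matlab
% Evaluation metrics for a forecast (Ytrue, Ypred are column vectors)
Ytrue = [10; 12; 15; 14; 18];          % example ground-truth values
Ypred = [11; 12.5; 14; 14.5; 17];      % example predicted values

MAE  = mean(abs(Ytrue - Ypred));                  % mean absolute error
MSE  = mean((Ytrue - Ypred).^2);                  % mean squared error
RMSE = sqrt(MSE);                                 % root mean squared error
MAPE = mean(abs((Ytrue - Ypred) ./ Ytrue)) * 100; % mean absolute percentage error, %
R2   = 1 - sum((Ytrue - Ypred).^2) / ...          % coefficient of determination
           sum((Ytrue - mean(Ytrue)).^2);
```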

Model building

BO-GRU (Bayesian Optimization Gated Recurrent Unit) combines Bayesian optimization with the gated recurrent unit (GRU) for time series forecasting tasks, in which future values are predicted from past observations.
The gated recurrent unit (GRU) is a variant of the recurrent neural network (RNN) with stronger modeling capability than a plain RNN. It controls the flow of information through an update gate and a reset gate, which lets it capture long-term dependencies in a time series more effectively.
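The update and reset gates mentioned above can be written out explicitly; in standard notation, a GRU cell computes:

```latex
\begin{aligned}
z_t &= \sigma\!\left(W_z x_t + U_z h_{t-1} + b_z\right) && \text{(update gate)} \\
r_t &= \sigma\!\left(W_r x_t + U_r h_{t-1} + b_r\right) && \text{(reset gate)} \\
\tilde{h}_t &= \tanh\!\left(W_h x_t + U_h \left(r_t \odot h_{t-1}\right) + b_h\right) && \text{(candidate state)} \\
h_t &= \left(1 - z_t\right) \odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(new hidden state)}
\end{aligned}
```

The update gate $z_t$ decides how much of the previous hidden state to carry forward, while the reset gate $r_t$ decides how much of it to use when forming the candidate state; this is what allows long-term dependencies to survive many time steps.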
Bayesian optimization is a method for optimizing expensive black-box objective functions: it samples the unknown objective and chooses the next sampling position based on the samples collected so far, which makes the search over the space efficient.
The basic idea of BO-GRU is to use Bayesian optimization to automatically tune the hyperparameters of the GRU model for better forecasting performance. Based on the performance of configurations already evaluated, the Bayesian optimization algorithm selects the next hyperparameter configuration to try, searches the hyperparameter space step by step, and uses Bayesian inference to update a probabilistic model over the hyperparameters. In this way, BO-GRU can find a good hyperparameter configuration in relatively few training runs, improving forecasting accuracy.
In summary, BO-GRU combines Bayesian optimization and gated recurrent units for time series forecasting: it improves model performance by automatically tuning hyperparameters and is able to capture long-term dependencies in the series.
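In MATLAB, this tuning loop is handled by the `bayesopt` function. A minimal sketch of how the pieces fit together (`gruObjective` is a hypothetical user-written wrapper that trains the GRU with one candidate configuration and returns the validation error; `optimVars` is the variable list defined in the code later in this post):

```matlab
% Objective: train a GRU with the candidate hyperparameters in optVars
% and return the validation RMSE, which bayesopt minimizes
ObjFcn = @(optVars) gruObjective(optVars);     % gruObjective: hypothetical wrapper

BayesObject = bayesopt(ObjFcn, optimVars, ...
    'MaxObjectiveEvaluations', 30, ...         % number of hyperparameter trials
    'IsObjectiveDeterministic', false, ...     % training is stochastic
    'UseParallel', false);

bestVars = bestPoint(BayesObject);             % best hyperparameter configuration found
```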

  • BO-GRU Bayesian-optimized gated recurrent unit time series forecasting: pseudocode (figure omitted in this excerpt)
  • The model parameters (learning rate, hidden-layer nodes, and regularization coefficient) are treated as hyperparameters and tuned automatically by the Bayesian optimization algorithm.

Programming

  • Complete program and data, method 1: private-message the blogger with "MATLAB implements BO-GRU Bayesian optimized gated recurrent unit time series prediction"; programs of equal value can also be exchanged.
  • Complete program and data, method 2 (direct download from the resource page): MATLAB implements BO-GRU Bayesian optimized gated recurrent unit time series prediction.
  • Complete program and data, method 3 (subscribe to the "GRU Gated Recurrent Unit" column, which also gives access to the column's other articles; private-message me after subscribing to get the data): MATLAB implements BO-GRU Bayesian optimized gated recurrent unit time series prediction. Outside the column, only this program is available.
%% Optimization algorithm settings
% Search bounds for (learning rate, hidden-layer nodes, regularization coefficient)
%% Bayesian optimization variable ranges
optimVars = [
    optimizableVariable('NumOfUnits', [10, 50], 'Type', 'integer')
    optimizableVariable('InitialLearnRate', [1e-3, 1], 'Transform', 'log')
    optimizableVariable('L2Regularization', [1e-10, 1e-2], 'Transform', 'log')];

%% Build the network architecture
% Input feature dimension
numFeatures  = f_;
% Output dimension
numResponses = 1;
% Convolution filter size (for the feature-learning layers elided from this excerpt)
FiltZise = 10;
% Build the GRU model (the original listing elided the layers between
% 'fold' and 'drop3'; the unfolding layer and a GRU layer are restored
% here so the layer graph is valid)
    layers = [...
        % Input features
        sequenceInputLayer([numFeatures 1 1],'Name','input')
        sequenceFoldingLayer('Name','fold')
        % Feature learning (elided convolutional layers would go here)
        sequenceUnfoldingLayer('Name','unfold')
        flattenLayer('Name','flatten')
        gruLayer(optVars.NumOfUnits,'OutputMode','last','Name','gru')
        dropoutLayer(0.25,'Name','drop3')
        % Fully connected output layer
        fullyConnectedLayer(numResponses,'Name','fc')
        regressionLayer('Name','output')    ];

    layers = layerGraph(layers);
    layers = connectLayers(layers,'fold/miniBatchSize','unfold/miniBatchSize');


% Mini-batch size
MiniBatchSize = 128;
% Maximum number of epochs
MaxEpochs = 500;
    options = trainingOptions( 'adam', ...
        'MaxEpochs',MaxEpochs, ...
        'MiniBatchSize',MiniBatchSize, ...
        'GradientThreshold',1, ...
        'InitialLearnRate',optVars.InitialLearnRate, ...
        'LearnRateSchedule','piecewise', ...
        'LearnRateDropPeriod',400, ...
        'LearnRateDropFactor',0.2, ...
        'L2Regularization',optVars.L2Regularization,...
        'Verbose',false, ...
        'Plots','none');

%% Train the network
net = trainNetwork(XrTrain,YrTrain,layers,options);
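Once training finishes, the network can be evaluated on held-out data. A hedged sketch of this step (`XrTest`, `YrTest`, and the normalization structure `ps_output` are assumptions about the surrounding program, not names confirmed by the original listing):

```matlab
%% Predict on the test set and evaluate (sketch)
Ypred = predict(net, XrTest, 'MiniBatchSize', MiniBatchSize);
Ypred = double(Ypred(:));

% If the targets were normalized before training, invert the mapping here,
% e.g. Ypred = mapminmax('reverse', Ypred', ps_output)';

RMSE = sqrt(mean((YrTest(:) - Ypred).^2));   % root mean squared error on the test set
fprintf('Test RMSE: %.4f\n', RMSE);
```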


Origin blog.csdn.net/kjm13182345320/article/details/132168650