Interval Forecasting | MATLAB Implementation of QRBiGRU: Bidirectional Gated Recurrent Unit Quantile Regression for Time-Series Interval Forecasting

Results at a glance

  • advanced version (result figures 1–3)

  • basic version (result figure 4)

Basic introduction

MATLAB implementation of QRBiGRU bidirectional gated recurrent unit quantile regression for time-series interval prediction.
QRBiGRU is a deep learning model frequently used for time-series forecasting. It combines a bidirectional Gated Recurrent Unit (BiGRU) with quantile regression, so the network estimates several quantiles of the target variable rather than a single point forecast, which improves the informativeness of the prediction.
For interval prediction of a time series, QRBiGRU predicts the range of future values from historical data. Concretely, the model learns the patterns and trends in the historical series and outputs upper and lower bounds for future values at a given confidence level. The approach has therefore been widely applied in finance, weather forecasting, transportation, and other fields.
Note that training and applying a QRBiGRU model requires a substantial amount of data and computing resources. When using the model for interval prediction, appropriate quantiles and hyperparameters must be selected, and the model should be tuned and validated to ensure accurate, stable predictions.
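To make the idea of "upper and lower bounds at a certain confidence level" concrete: predicting, say, the q = 0.05 and q = 0.95 quantiles yields a nominal 90% prediction interval, whose quality can be judged by the fraction of observations it actually covers. The article's code is MATLAB; the sketch below shows the coverage check in Python, with all names (`interval_coverage`, the sample arrays) chosen here purely for illustration:

```python
import numpy as np

def interval_coverage(y_true, lower, upper):
    """Fraction of observations that fall inside the predicted
    interval [lower, upper] (the empirical coverage rate)."""
    y = np.asarray(y_true, dtype=float)
    inside = (y >= np.asarray(lower, dtype=float)) & (y <= np.asarray(upper, dtype=float))
    return float(inside.mean())
```

If the empirical coverage falls well below the nominal level, the chosen quantiles are too narrow or the model is miscalibrated and should be re-validated.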

Model description

Quantile regression is ordinary regression, like ordinary least squares, except that instead of minimizing the sum of squared errors it minimizes an asymmetrically weighted sum of absolute errors (the pinball loss) determined by the chosen quantile q. For q = 0.50 (the median), quantile regression reduces to a special case, least absolute deviations, since the median is the central quantile. The quantile q can be treated as a tunable hyperparameter, chosen to balance false positives against false negatives for the problem at hand. A GRU has two gates, a reset gate and an update gate. Intuitively, the reset gate determines how new input is combined with the previous memory, and the update gate determines how much of the previous memory is carried into the current time step. If the reset gate is fixed at 1 and the update gate is fixed so that the candidate state fully replaces the memory, the model reduces to a standard RNN.
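The asymmetric absolute-error objective described above is the pinball (quantile) loss. As a minimal illustration (in Python rather than the article's MATLAB; the function name is mine), it can be written as:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Average pinball (quantile) loss at quantile level q in (0, 1).

    Under-prediction (e > 0) is weighted by q and over-prediction
    (e < 0) by 1 - q, so minimizing this loss pushes y_pred toward
    the q-th conditional quantile of y_true."""
    e = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.maximum(q * e, (q - 1) * e)))
```

At q = 0.5 the two weights are equal and the loss is half the mean absolute error, which is why the median is the special least-absolute-error case.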

1. MATLAB implementation of a time-series interval prediction model based on QRBiGRU (quantile regression bidirectional gated recurrent unit);
2. Multiple plots and multiple metrics (MAE, RMSE, MSE, R2); multi-input, single-output; includes plots at different confidence levels and probability density plots;
3. data is the data set (a power data set): variables from a past window are used to predict the target, which is the last column; the model can also be applied to load forecasting and wind speed forecasting. MainQRGRUTS is the main program; the remaining files are functions and do not need to be run directly.
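The point-forecast metrics listed in item 2 (MAE, RMSE, MSE, R2) have standard definitions. A small Python sketch (names chosen here for illustration) computing them for a point forecast, such as the predicted median quantile:

```python
import numpy as np

def point_metrics(y_true, y_pred):
    """MAE, MSE, RMSE and R^2 for a point forecast."""
    y = np.asarray(y_true, dtype=float)
    p = np.asarray(y_pred, dtype=float)
    err = y - p
    mae = float(np.mean(np.abs(err)))          # mean absolute error
    mse = float(np.mean(err ** 2))             # mean squared error
    rmse = mse ** 0.5                          # root mean squared error
    ss_res = float(np.sum(err ** 2))           # residual sum of squares
    ss_tot = float(np.sum((y - y.mean()) ** 2))  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                 # coefficient of determination
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}
```

Note that R2 compares the model against the mean of the test targets, so it can be negative for a forecast worse than that trivial baseline.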

Programming

  • Complete program and data for the basic version (one-way GRU): subscribe to "GRU Gated Recurrent Unit" (send the blogger a private message after subscribing to obtain the data): MATLAB implements QRGRU Gated Recurrent Unit Quantile Regression Time Series Interval Prediction
  • Complete program and data for the advanced version: send the blogger a private message.
%-------------------------------------------------------------------------
% Build the network
layers = [sequenceInputLayer(numFeatures)
    gruLayer(numHiddenUnits)              % recurrent layer; this line is missing from the original excerpt, and numHiddenUnits must be set by the user
    dropoutLayer(0.2)                     % dropout probability
    reluLayer('Name','relu')              % ReLU activation
    fullyConnectedLayer(numResponses)
    regressionLayer];
%-------------------------------------------------------------------------
XTrain = XTrain';
YTrain = YTrain';
%-------------------------------------------------------------------------
%% Training options
options = trainingOptions('adam', ...     % Adam optimizer (adaptive learning rate)
    'MaxEpochs',500, ...                  % maximum number of epochs
    'MiniBatchSize',5, ...                % mini-batch size
    'GradientThreshold',1, ...            % gradient clipping, prevents exploding gradients
    'InitialLearnRate',0.005, ...         % initial learning rate
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',125, ...        % drop the learning rate after every 125 epochs
    'LearnRateDropFactor',0.2, ...        % drop factor 0.2
    'ValidationData',{XTrain,YTrain}, ...
    'ValidationFrequency',5, ...          % validate every 5 iterations
    'Verbose',1, ...
    'Plots','training-progress');
%% Train the network
net = trainNetwork(XTrain,YTrain,layers,options);
%% Standardize the test samples
dataTestStandardized = (data_Test - mu) / sig;
XTest = dataTestStandardized(1:end-1,:);  % test inputs
YTest = data_Test(2:end,:);               % test targets
%-------------------------------------------------------------------------
XTest = XTest';
————————————————
Copyright notice: this is an original article by CSDN blogger "机器学习之心", licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/kjm13182345320/article/details/126805601
layers = [ ...
    sequenceInputLayer(inputSize,'Name','input')                  % input layer
    gruLayer(numhidden_units1,'OutputMode','sequence','Name','hidden1')  % first GRU layer; missing from the original excerpt, inferred from the 'hidden2' name below
    dropoutLayer(0.3,'Name','dropout_1')
    gruLayer(numhidden_units2,'OutputMode','last','Name','hidden2')
    dropoutLayer(0.3,'Name','dropout_2')
    fullyConnectedLayer(outputSize,'Name','fullconnect')          % fully connected layer (sets the output dimension)
    quanRegressionLayer('out',i)];                                % custom quantile-regression output layer for the i-th quantile
%-------------------------------------------------------------------------
% Training options
opts = trainingOptions('adam', ...
    'MaxEpochs',10, ...
    'GradientThreshold',1, ...
    'ExecutionEnvironment','cpu', ...
    'InitialLearnRate',0.001, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',2, ...   % drop the learning rate after every 2 epochs
    'LearnRateDropFactor',0.5, ...
    'Shuffle','once', ...
    'SequenceLength',1, ...        % sequence length
    'MiniBatchSize',24, ...
    'Verbose',0);
%-------------------------------------------------------------------------
% Network training: extract inputs and targets from the table
%-------------------------------------------------------------------------
y = Test.demand;
x = Test{:,3:end};
%-------------------------------------------------------------------------
% Min-max normalization to [0,1]
[xnorm,xopt] = mapminmax(x',0,1);
xnorm = mat2cell(xnorm,size(xnorm,1),ones(1,size(xnorm,2)));
[ynorm,yopt] = mapminmax(y',0,1);
ynorm = ynorm';
        % flatten layer
        flattenLayer('Name','flatten')
        % feature learning
        dropoutLayer(0.25,'Name','drop3')
        % fully connected layer
        fullyConnectedLayer(numResponses,'Name','fc')
        regressionLayer('Name','output')];

    layers = layerGraph(layers);
    layers = connectLayers(layers,'fold/miniBatchSize','unfold/miniBatchSize');
————————————————
Copyright notice: this is an original article by CSDN blogger "机器学习之心", licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/kjm13182345320/article/details/130447132

Origin blog.csdn.net/kjm13182345320/article/details/130557788