Time series prediction | MATLAB implements NGO-GRU: Northern Goshawk Optimization algorithm for gated recurrent unit time series prediction


Prediction effect

[Figures: prediction-result plots of the NGO-GRU model; images are not included in this text version.]

Basic introduction

MATLAB implementation of NGO-GRU: the Northern Goshawk Optimization algorithm tunes a gated recurrent unit (GRU) network for time series prediction (complete source code and data).
1. data is the data set, a univariate time series.
2. MainNGOGRUTS.m is the main program file; the other function files do not need to be run directly.
3. The command window outputs MAE, MSE, RMSEP, R^2, RPD and MAPE; the data and program can be obtained in the download area.
4. The parameters optimized by the Northern Goshawk algorithm are the learning rate, the number of hidden-layer nodes, and the L2 regularization coefficient.
Note: keep the program and data files in the same folder; the required environment is MATLAB 2018 or later.
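The NGO search itself lives in the package's NGO.m, which is not reproduced here. As a rough illustration of how the algorithm explores the three bounded hyperparameters, the sketch below implements a simplified version of NGO's two phases in Python: a prey-identification phase that moves each agent toward a randomly chosen better agent (or away from a worse one), and a chase phase that does a local random walk with a shrinking radius. The update rules are simplified and the toy objective merely stands in for the GRU validation error; none of this is the package's exact code.

```python
import numpy as np

def ngo(fobj, lb, ub, n_agents=5, max_iter=10, seed=0):
    """Simplified Northern Goshawk Optimization sketch (illustrative only)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = lb + rng.random((n_agents, dim)) * (ub - lb)   # initial population
    fit = np.array([fobj(x) for x in X])

    for t in range(1, max_iter + 1):
        for i in range(n_agents):
            # Phase 1: prey identification (exploration)
            j = rng.integers(n_agents)                 # random "prey" agent
            r, I = rng.random(dim), rng.integers(1, 3) # I in {1, 2}
            if fit[j] < fit[i]:
                cand = X[i] + r * (X[j] - I * X[i])    # move toward better prey
            else:
                cand = X[i] + r * (X[i] - X[j])        # move away from worse agent
            cand = np.clip(cand, lb, ub)
            f = fobj(cand)
            if f < fit[i]:
                X[i], fit[i] = cand, f
            # Phase 2: chase (exploitation) with a radius shrinking over iterations
            R = 0.02 * (1 - t / max_iter)
            cand = np.clip(X[i] + R * (2 * rng.random(dim) - 1) * X[i], lb, ub)
            f = fobj(cand)
            if f < fit[i]:
                X[i], fit[i] = cand, f

    best = np.argmin(fit)
    return fit[best], X[best]

# Toy quadratic objective standing in for the GRU validation error,
# with the same bounds as the MATLAB program:
best_score, best_pos = ngo(lambda x: np.sum((x - 0.5) ** 2),
                           lb=[1e-10, 1e-4, 10], ub=[1e-2, 2e-3, 400])
```

In the real program the fitness function trains a GRU with the candidate hyperparameters and returns its prediction error, which is far more expensive than this toy objective, hence the small population (5) and iteration count (10).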

Programming

%% -------------- GRU optimization ----------------------
% Parameter settings
SearchAgents = 5;     % population size
Max_iterations = 10;  % number of iterations

lowerbound = [1e-10 0.0001 10 ];  % lower bounds of the three parameters
upperbound = [1e-2  0.002  400];  % upper bounds of the three parameters
dim = 3;  % number of GRU hyperparameters to optimize

fobj = @(x)fun(x,inputn_train,outputn_train,outputps);   % fun computes the fitness value

%% Run the optimizer
[Best_score,Best_pos,Convergence_curve] = NGO(SearchAgents,Max_iterations,lowerbound,upperbound,dim,fobj);  % Northern Goshawk Optimization

% Retrieve the optimal parameters
L2Regularization = Best_pos(1,1);              % best L2 regularization coefficient
InitialLearnRate = Best_pos(1,2);              % best initial learning rate
NumOfUnits       = abs(round(Best_pos(1,3)));  % best number of hidden-layer nodes

%% ------------------ Retrain and predict with the optimized parameters ----------------------------
% Feature dimension of the input x
inputSize  = size(inputn_train,1);
% Dimension of the output y
outputSize = size(outputn_train,1);

% Define the network architecture
layers = [ ...
    sequenceInputLayer(inputSize)      % input layer; argument is the input feature dimension
    gruLayer(NumOfUnits)               % GRU layer; number of hidden units
    dropoutLayer(0.2)                  % dropout probability
    fullyConnectedLayer(outputSize)    % fully connected layer, i.e. the output dimension
    regressionLayer];                  % regression layer (regression task, not classification)

% Training options
opts = trainingOptions('adam', ...          % optimizer
    'MaxEpochs',100, ...                    % maximum number of epochs
    'GradientThreshold',1, ...              % gradient threshold, to prevent exploding gradients
    'ExecutionEnvironment','cpu', ...       % for large data sets, long sequences or large networks, the GPU is usually faster; otherwise the CPU is usually faster
    'InitialLearnRate',InitialLearnRate, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',120, ...
    'LearnRateDropFactor',0.2, ...          % after every 120 epochs, multiply the learning rate by the factor 0.2
    'L2Regularization', L2Regularization, ... % L2 regularization coefficient
    'Verbose',false, ...                    % if true, training-progress information is printed to the command window
    'Plots','training-progress' ...         % plot the training curve; replace 'training-progress' with 'none' to suppress it
    );   % 'MiniBatchSize',outputSize*30, ...
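The 'piecewise' schedule above multiplies the learning rate by the drop factor every LearnRateDropPeriod epochs. A minimal sketch of the resulting per-epoch rate (note that with MaxEpochs = 100 and a drop period of 120, the first drop never actually occurs, so the optimized initial rate is used for the whole run):

```python
def piecewise_lr(initial_lr, drop_period, drop_factor, epoch):
    """Learning rate under a piecewise schedule: multiply the initial
    rate by drop_factor once per completed drop_period of epochs."""
    return initial_lr * drop_factor ** (epoch // drop_period)

# Example: an initial rate of 0.001 with the options used above.
rates = [piecewise_lr(0.001, 120, 0.2, e) for e in range(100)]
```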



%% ----------------- Prediction results -------------------------
% Data format conversion

train_DATA = output_train';   % training sample labels
test_DATA  = output_test';    % test sample labels
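The program reports MAE, MSE, RMSE(P), R^2, RPD and MAPE on these labels. As a language-neutral reference, a Python sketch of these six metrics is shown below, using made-up example values; RPD is computed here as the standard deviation of the observations divided by the RMSE, which is one common definition and may differ in detail from the package's fun.m.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE, R^2, RPD and MAPE for a univariate series."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_true - y_pred
    mae  = np.mean(np.abs(err))                       # mean absolute error
    mse  = np.mean(err ** 2)                          # mean squared error
    rmse = np.sqrt(mse)                               # root mean squared error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2   = 1 - ss_res / ss_tot                        # coefficient of determination
    rpd  = np.std(y_true, ddof=1) / rmse              # ratio of performance to deviation
    mape = np.mean(np.abs(err / y_true)) * 100        # mean absolute percentage error
    return {"MAE": mae, "MSE": mse, "RMSE": rmse,
            "R2": r2, "RPD": rpd, "MAPE": mape}

# Hypothetical labels and predictions, for illustration only:
m = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```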



Origin blog.csdn.net/kjm13182345320/article/details/133051614