Regression prediction | SO-CNN-BiGRU in MATLAB: snake optimizer (SO) tuning of a convolutional bidirectional gated recurrent unit for multi-input single-output regression prediction
Table of contents
Prediction results
Basic introduction
1. MATLAB implementation of SO-CNN-BiGRU: the snake optimizer (SO) tunes a convolutional bidirectional gated recurrent unit (CNN-BiGRU) for multi-input single-output regression prediction (complete source code and data included);
2. Multiple input features and a single output variable, i.e. multi-input single-output regression prediction;
3. Multi-metric evaluation, including R2, MAE, MSE, and RMSE;
4. The snake optimizer tunes three hyperparameters: the learning rate, the number of hidden-layer nodes, and the L2 regularization coefficient;
5. Data are supplied as Excel files for easy replacement; requires MATLAB R2020a or later.
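The metrics listed in point 3 are standard regression scores. As a minimal sketch (the variable names `y_true` and `y_pred` are illustrative, not taken from the original code), they can be computed in MATLAB as:

```matlab
% Regression evaluation metrics (y_true, y_pred are column vectors;
% names are illustrative placeholders, not from the original source)
y_true = [3.0; -0.5; 2.0; 7.0];
y_pred = [2.5;  0.0; 2.0; 8.0];

err  = y_true - y_pred;
MSE  = mean(err.^2);        % mean squared error
RMSE = sqrt(MSE);           % root mean squared error
MAE  = mean(abs(err));      % mean absolute error
R2   = 1 - sum(err.^2) / sum((y_true - mean(y_true)).^2);  % coefficient of determination
fprintf('MSE=%.4f RMSE=%.4f MAE=%.4f R2=%.4f\n', MSE, RMSE, MAE, R2);
```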
Programming
- Complete source code and data, method 1: send the blogger a private message, or exchange a program of equivalent value;
- Complete program and data, method 2: subscribe to the "Combined Optimization" column (the subscription also grants any 8 programs included in that column; message me after subscribing to receive the data): MATLAB implements SO-CNN-BiGRU snake optimizer tuned convolutional bidirectional gated recurrent unit multi-input single-output regression prediction
%% Keep the best individual in the population
for j = 1 : SearchAgents
    if fitness_new(j) < GBestF
        GBestF = fitness_new(j);
        GBestX = X_new(j, :);
    end
end
%% Update the population and its fitness values
pop_new = X_new;
fitness = fitness_new;
%% Sort the population by ascending fitness
[fitness, index] = sort(fitness);
pop_new = pop_new(index, :);  % reorder all rows at once (the original per-row loop read rows it had already overwritten)
%% Record the convergence curves
curve(i) = GBestF;              % best fitness found so far
avcurve(i) = mean(curve(1:i));  % running average over the first i iterations (sum(curve)/length(curve) would dilute the average with preallocated zeros)
end
%% Best solution found
Best_pos = GBestX;
Best_score = curve(end);
%% Decode the optimized hyperparameters
NumOfUnits = abs(round(Best_pos(1,3)));  % optimal number of hidden-layer units
InitialLearnRate = Best_pos(1,2);        % optimal initial learning rate
L2Regularization = Best_pos(1,1);        % optimal L2 regularization coefficient
%
inputSize = k;   % number of input features
outputSize = 1;  % dimension of the output y
% Training options
opts = trainingOptions('adam', ...             % Adam optimizer
    'MaxEpochs', 20, ...                       % maximum number of training epochs
    'GradientThreshold', 1, ...                % gradient clipping threshold
    'InitialLearnRate', InitialLearnRate, ...  % initial learning rate (from SO)
    'LearnRateSchedule', 'piecewise', ...      % piecewise learning-rate schedule
    'LearnRateDropPeriod', 6, ...              % drop the learning rate every 6 epochs
    'LearnRateDropFactor', 0.2, ...            % learning-rate drop factor
    'L2Regularization', L2Regularization, ...  % L2 regularization coefficient (from SO)
    'ExecutionEnvironment', 'gpu', ...         % training environment ('cpu' if no GPU is available)
    'Verbose', 0, ...                          % suppress command-line progress output
    'SequenceLength', 1, ...                   % sequence length
    'MiniBatchSize', 10, ...                   % mini-batch size
    'Plots', 'training-progress');             % plot the training-progress curves
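These options are passed to `trainNetwork` together with the network's layer stack, which is not shown in this excerpt. The following is only a hedged sketch of what a CNN-BiGRU stack might look like: Deep Learning Toolbox provides `gruLayer` and `bilstmLayer` but no built-in bidirectional GRU layer, so a single `gruLayer` stands in for the BiGRU block here, and the filter size and count are assumptions.

```matlab
% Illustrative CNN-(Bi)GRU stack for multi-input single-output regression.
% Layer sizes are placeholders; the original source defines its own stack.
layers = [
    sequenceInputLayer(inputSize)                  % k features per time step
    convolution1dLayer(3, 16, 'Padding', 'same')   % 1-D convolution over the sequence
    reluLayer                                      % ReLU activation
    gruLayer(NumOfUnits, 'OutputMode', 'last')     % stand-in for the BiGRU block
    fullyConnectedLayer(outputSize)                % single regression output
    regressionLayer];                              % MSE regression loss

net   = trainNetwork(XTrain, YTrain, layers, opts);  % XTrain/YTrain assumed prepared earlier
YPred = predict(net, XTest);                         % predictions on the test set
```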