Time series prediction | MATLAB implementation of EMD-GWO-SVR: empirical mode decomposition combined with the grey wolf optimizer to tune support vector regression (including EMD-GWO-SVR, EMD-SVR, GWO-SVR, and SVR comparison)

Prediction results


Basic introduction

A MATLAB implementation of four time series predictors for comparison: EMD-GWO-SVR (empirical mode decomposition combined with grey-wolf-optimized support vector regression), EMD-SVR (empirical mode decomposition combined with support vector regression), GWO-SVR (grey-wolf-optimized support vector regression), and plain SVR. Complete source code and data are included.

Model introduction

EMD-GWO-SVR is a time series forecasting method that combines Empirical Mode Decomposition (EMD), the Grey Wolf Optimizer (GWO) and Support Vector Regression (SVR).
First, the original time series is decomposed with EMD into multiple intrinsic mode functions (IMFs). Then an SVR model is trained for each IMF, with its hyperparameters tuned by the GWO algorithm. Finally, the predictions of all the IMF models are summed to produce the final forecast.
EMD decomposes a nonlinear, non-stationary signal into several intrinsic mode functions; each IMF captures the oscillation of the original signal at a different time scale. GWO is an optimization algorithm that mimics the hunting behaviour of grey wolf packs and searches for a global optimum. SVR is a kernel-based nonlinear regression method that can handle high-dimensional, nonlinear data.
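As a rough illustration of the decompose-predict-recombine idea (this is not a real EMD implementation, which requires sifting with cubic-spline envelopes of local extrema), the following Python sketch splits a signal into a slow moving-average trend and a fast residual; by construction the components sum back exactly to the original, just as the IMFs plus residue reconstruct the signal in EMD:

```python
import numpy as np

def moving_average_split(x, window=11):
    """Split a signal into a slow trend and a fast residual.

    Stand-in for EMD only: real EMD iteratively sifts out IMFs
    using envelopes of local maxima and minima.
    """
    kernel = np.ones(window) / window
    # pad the edges so the smoothed trend keeps the original length
    padded = np.pad(x, window // 2, mode="edge")
    trend = np.convolve(padded, kernel, mode="valid")
    residual = x - trend
    return trend, residual

t = np.linspace(0, 4 * np.pi, 200)
signal = np.sin(t) + 0.3 * np.sin(8 * t)      # slow + fast component
trend, residual = moving_average_split(signal)

# decomposition is exact by construction: trend + residual == signal
print(np.allclose(trend + residual, signal))  # prints True
```

In the full method, each extracted component would get its own forecaster, and the component forecasts are summed.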
The advantage of EMD-GWO-SVR is that it fully exploits the nonlinear and non-stationary structure of the time series and adaptively optimizes the model for each IMF, improving both the accuracy and the robustness of the forecast. In addition, the global search capability of GWO helps keep the SVR models from getting stuck in local optima.
The EMD-GWO-SVR method can be applied to various time series forecasting problems, such as stock price forecasting, meteorological data forecasting, traffic flow forecasting, etc.
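To make the GWO step concrete, here is a minimal Python sketch of the grey wolf update (the same Alpha/Beta/Delta scheme and equations as the MATLAB `GWO` function below, in a common shifting-leaders variant). It minimizes a simple quadratic objective rather than an actual SVR hyperparameter loss, purely for illustration:

```python
import numpy as np

def gwo(fobj, dim, lb, ub, n_wolves=20, max_iter=200, seed=0):
    """Minimal Grey Wolf Optimizer: minimizes fobj over the box [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lb, ub, size=(n_wolves, dim))
    # leaders: alpha (best), beta (2nd best), delta (3rd best)
    leaders = [(np.inf, None), (np.inf, None), (np.inf, None)]
    for it in range(max_iter):
        for i in range(n_wolves):
            np.clip(pos[i], lb, ub, out=pos[i])      # clamp to the search space
            fit = fobj(pos[i])
            if fit < leaders[0][0]:
                leaders = [(fit, pos[i].copy()), leaders[0], leaders[1]]
            elif fit < leaders[1][0]:
                leaders = [leaders[0], (fit, pos[i].copy()), leaders[1]]
            elif fit < leaders[2][0]:
                leaders = [leaders[0], leaders[1], (fit, pos[i].copy())]
        a = 2 - it * (2 / max_iter)                  # linearly decreasing coefficient
        for i in range(n_wolves):
            x_new = np.zeros(dim)
            for _, lead in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2        # Equations (3.3), (3.4)
                D = np.abs(C * lead - pos[i])        # Equation (3.5)
                x_new += lead - A * D                # Equation (3.6)
            pos[i] = x_new / 3                       # Equation (3.7)
    return leaders[0]

best_fit, best_pos = gwo(lambda x: np.sum(x ** 2), dim=2, lb=-10, ub=10)
print(best_fit)  # close to 0 for this sphere objective
```

In the actual pipeline, `fobj` would be the cross-validation error of an SVR model as a function of its penalty and kernel parameters, and the search box would cover the admissible parameter ranges.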

Code

%% Compare the algorithms
clc; clear; close all
%%
svr = load('result/SVR.mat');
result(svr.T_test, svr.T_sim2, 'SVR')

gwosvr = load('result/GWO-SVR.mat');
result(gwosvr.T_test, gwosvr.T_sim2, 'GWO-SVR')

emdsvr = load('result/EMD-SVR.mat');
result(emdsvr.T_test, emdsvr.T_sim2, 'EMD-SVR')

emdgwosvr = load('result/EMD-GWO-SVR.mat');
result(emdgwosvr.T_test, emdgwosvr.T_sim2, 'EMD-GWO-SVR')
%%
figure
plot(svr.T_test, '-r')
hold on; grid on
plot(svr.T_sim2, '-c')
plot(gwosvr.T_sim2, '-g')
plot(emdsvr.T_sim2, '-k')
plot(emdgwosvr.T_sim2, '-b')

legend('True values', 'SVR prediction', 'GWO-SVR prediction', 'EMD-SVR prediction', 'EMD-GWO-SVR prediction')
title('Results of each algorithm')
xlabel('Test sample index')
ylabel('Value')

function [Alpha_score, Alpha_pos, Convergence_curve, curve] = GWO(SearchAgents_no, Max_iteration, lb, ub, dim, fobj)

%%  Initialize the optimizer
Alpha_pos = zeros(1, dim);  % Alpha wolf position
Alpha_score = inf;          % Alpha wolf fitness (change to -inf for maximization problems)

Beta_pos = zeros(1, dim);   % Beta wolf position
Beta_score = inf;           % Beta wolf fitness (change to -inf for maximization problems)

Delta_pos = zeros(1, dim);  % Delta wolf position
Delta_score = inf;          % Delta wolf fitness (change to -inf for maximization problems)

%%  Initialize the positions of the wolf pack
Positions = initialization(SearchAgents_no, dim, ub, lb);

%%  Record the convergence curve
Convergence_curve = zeros(1, Max_iteration);
%%  Loop counter
iter = 0;

%%  Main optimization loop
while iter < Max_iteration           % loop over iterations
    for i = 1 : size(Positions, 1)   % loop over wolves

        % Bring back wolves that have left the search space:
        % positions inside [lb, ub] are kept unchanged; positions above the
        % upper bound are reset to ub, positions below the lower bound to lb
        Flag4ub = Positions(i, :) > ub;
        Flag4lb = Positions(i, :) < lb;
        Positions(i, :) = (Positions(i, :) .* (~(Flag4ub + Flag4lb))) + ub .* Flag4ub + lb .* Flag4lb;

        % Evaluate the fitness function
        fitness = fobj(Positions(i, :));

        % Update Alpha, Beta and Delta
        if fitness < Alpha_score           % better than Alpha
            Alpha_score = fitness;         % record the new best fitness
            Alpha_pos = Positions(i, :);   % and the new best position
        end

        if fitness > Alpha_score && fitness < Beta_score   % between Alpha and Beta
            Beta_score = fitness;                          % update Beta's fitness
            Beta_pos = Positions(i, :);                    % and Beta's position
        end

        if fitness > Alpha_score && fitness > Beta_score && fitness < Delta_score  % between Beta and Delta
            Delta_score = fitness;                                                 % update Delta's fitness
            Delta_pos = Positions(i, :);                                           % and Delta's position
        end

    end

    % Linearly decrease the coefficient a from 2 to 0
    wa = 2 - iter * (2 / Max_iteration);

    % Update the positions of the wolf pack
    for i = 1 : size(Positions, 1)      % loop over wolves
        for j = 1 : size(Positions, 2)  % loop over dimensions

            % Encircle the prey: position update
            r1 = rand; % r1 is a random number in [0,1]
            r2 = rand; % r2 is a random number in [0,1]

            A1 = 2 * wa * r1 - wa;   % coefficient A, Equation (3.3)
            C1 = 2 * r2;             % coefficient C, Equation (3.4)

            % Move towards Alpha
            D_alpha = abs(C1 * Alpha_pos(j) - Positions(i, j));   % Equation (3.5)-part 1
            X1 = Alpha_pos(j) - A1 * D_alpha;                     % Equation (3.6)-part 1

            r1 = rand;
            r2 = rand;

            A2 = 2 * wa * r1 - wa;   % coefficient A, Equation (3.3)
            C2 = 2 * r2;             % coefficient C, Equation (3.4)

            % Move towards Beta
            D_beta = abs(C2 * Beta_pos(j) - Positions(i, j));    % Equation (3.5)-part 2
            X2 = Beta_pos(j) - A2 * D_beta;                      % Equation (3.6)-part 2

            r1 = rand;
            r2 = rand;

            A3 = 2 * wa * r1 - wa;   % coefficient A, Equation (3.3)
            C3 = 2 * r2;             % coefficient C, Equation (3.4)

            % Move towards Delta
            D_delta = abs(C3 * Delta_pos(j) - Positions(i, j));   % Equation (3.5)-part 3
            X3 = Delta_pos(j) - A3 * D_delta;                     % Equation (3.6)-part 3

            % New position: average of the three leader-guided moves
            Positions(i, j) = (X1 + X2 + X3) / 3;                 % Equation (3.7)

        end
    end

    % Advance the iteration counter and record convergence
    iter = iter + 1;
    Convergence_curve(iter) = Alpha_score;
    curve(iter) = sum(Convergence_curve) / iter;   % running mean of the best fitness
    disp(['Iteration ', num2str(iter), ', best fitness: ', num2str(Alpha_score)]);
end

%%  The best parameters are returned in Alpha_pos
end
function result(true_value, predict_value, type)
disp(type)
rmse = sqrt(mean((true_value - predict_value).^2));
disp(['Root mean square error (RMSE): ', num2str(rmse)])
mae = mean(abs(true_value - predict_value));
disp(['Mean absolute error (MAE): ', num2str(mae)])
mape = mean(abs((true_value - predict_value) ./ true_value));
disp(['Mean absolute percentage error (MAPE): ', num2str(mape * 100), '%'])
r2 = R2(predict_value, true_value);
disp(['Coefficient of determination (R^2): ', num2str(r2)])
nse = NSE(predict_value, true_value);
disp(['Nash-Sutcliffe efficiency (NSE): ', num2str(nse)])

fprintf('\n')
end
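For readers working outside MATLAB, the same five evaluation metrics can be sketched in Python. The `R2` and `NSE` helper functions used above are not shown in this post, so the formulas below (squared Pearson correlation for R^2, and 1 - SSE/SST for NSE) are assumed, standard definitions:

```python
import numpy as np

def evaluate(true_value, predict_value, name):
    """Compute the same metrics as the MATLAB `result` function."""
    t = np.asarray(true_value, dtype=float)
    p = np.asarray(predict_value, dtype=float)
    err = t - p
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / t))               # assumes no zeros in t
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((t - t.mean()) ** 2)
    r2 = np.corrcoef(t, p)[0, 1] ** 2             # squared Pearson correlation
    nse = 1 - ss_res / ss_tot                     # Nash-Sutcliffe efficiency
    print(f"{name}: RMSE={rmse:.4f} MAE={mae:.4f} "
          f"MAPE={mape * 100:.2f}% R2={r2:.4f} NSE={nse:.4f}")
    return rmse, mae, mape, r2, nse

metrics = evaluate([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9], "demo")
```

Note that MAPE is undefined when the true series contains zeros; series with values near zero should use a different relative-error metric.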



Origin blog.csdn.net/kjm13182345320/article/details/131832840