Time Series Forecasting | MATLAB Implementation of INFO-ELM: Extreme Learning Machine Optimized by the Weighted Mean of Vectors (INFO) Algorithm


Results at a Glance

[Figures 1–3: prediction result plots]

Introduction

MATLAB implementation of INFO-ELM: an extreme learning machine optimized by the weighted mean of vectors (INFO) algorithm for time series prediction.
1. data is a univariate time series dataset; the code requires MATLAB R2018b or later.
2. Run the main program file; the remaining files are function files and do not need to be run.
3. The command window prints MAE, MBE, and R2 (a sketch of these metrics follows this list); the data and program can be obtained from the download area.
4. INFO optimizes the ELM's weights and thresholds, i.e., the input weights and hidden-layer biases.
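
For reference, a minimal sketch of how the reported metrics can be computed, assuming T_test holds the true test targets and T_sim2 the corresponding predictions (variable names follow the program excerpt below):

%%  Evaluation metrics (sketch)
mae2 = mean(abs(T_sim2 - T_test));                  % mean absolute error
mbe2 = mean(T_sim2 - T_test);                       % mean bias error (signed)
r2   = 1 - sum((T_test - T_sim2).^2) / ...
           sum((T_test - mean(T_test)).^2);         % coefficient of determination
disp(['MAE=' num2str(mae2) ' MBE=' num2str(mbe2) ' R2=' num2str(r2)])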

Program Design

%%  Optimization parameter settings
dim = hiddennum * inputnum + hiddennum;             % number of parameters to optimize
lb  = -1 * ones(1, dim);                            % lower bound for each parameter
ub  =  1 * ones(1, dim);                            % upper bound for each parameter

%%  Optimization algorithm
fobj = @(x)fun(x, p_train, t_train, hiddennum);
[Best_pos,Best_score, curve] = INFO(Particles_no, Max_iter, lb, ub, dim, fobj);
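
% The fitness function fun is one of the function files not shown in this post.
% A plausible minimal sketch (an assumption, not the author's exact file):
% decode a candidate x into (IW, B), train the ELM once, and return the
% training-set RMSE as the fitness to be minimized:
%
%     function fitness = fun(x, p_train, t_train, hiddennum)
%     inputnum = size(p_train, 1);                      % number of input features
%     IW = reshape(x(1 : inputnum * hiddennum), hiddennum, inputnum);
%     B  = reshape(x(inputnum * hiddennum + 1 : end), hiddennum, 1);
%     [IW, B, LW, TF, TYPE] = elmtrain(p_train, t_train, 'sig', 0, IW, B);
%     t_sim   = elmpredict(p_train, IW, B, LW, TF, TYPE);
%     fitness = sqrt(mean((t_sim - t_train).^2));       % training RMSE
%     end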

%%  Extract the optimal weights and biases
w1 = Best_pos(1 : inputnum * hiddennum);
B1 = Best_pos(inputnum * hiddennum + 1 : inputnum * hiddennum + hiddennum);

IW = reshape(w1, hiddennum, inputnum);
B  = reshape(B1, hiddennum, 1);

%%  Network training
[IW, B, LW, TF, TYPE] = elmtrain(p_train, t_train, 'sig', 0, IW, B);

%%  Network prediction
T_sim1 = elmpredict(p_train, IW, B, LW, TF, TYPE);
T_sim2 = elmpredict(p_test , IW, B, LW, TF, TYPE);

%%  Root mean square error
error1 = sqrt(sum((T_sim1 - T_train).^2) ./ M);     % M: number of training samples
error2 = sqrt(sum((T_sim2 - T_test ).^2) ./ N);     % N: number of test samples

%%  Plotting
figure
plot(1: M, T_train, 'r-*', 1: M, T_sim1, 'b-o', 'LineWidth', 1)
legend('True values', 'Predicted values')
xlabel('Sample index')
ylabel('Prediction result')
string = {'Training set prediction comparison'; ['RMSE=' num2str(error1)]};
title(string)
xlim([1, M])
grid

figure
plot(1: N, T_test, 'r-*', 1: N, T_sim2, 'b-o', 'LineWidth', 1)
legend('True values', 'Predicted values')
xlabel('Sample index')
ylabel('Prediction result')
string = {'Test set prediction comparison'; ['RMSE=' num2str(error2)]};
title(string)
xlim([1, N])
grid


%-------------------------------------------------------------------------------------------------------------------------
function [Best_pos,Best_Cost,curve,avcurve]=INFO(pop,Max_iter,lb,ub,dim,fobj)
%% Initialization
        Cost=zeros(pop,1);
        M=zeros(pop,1);
        
        X=initialization(pop,dim,ub,lb);            % population init within [lb, ub] (helper file not shown)
        
        for i=1:pop
           Cost(i) = fobj(X(i,:)); 
           M(i)=Cost(i);
        end
        
        [~, ind]=sort(Cost);
        Best_pos = X(ind(1),:);
        Best_Cost = Cost(ind(1));
        
        Worst_Cost = Cost(ind(end));
        Worst_X = X(ind(end),:);
        
        I=randi([2 5]);
        Better_X=X(ind(I),:);
        Better_Cost=Cost(ind(I));
        
%% Main Loop of INFO
        for it=1:Max_iter
            alpha=2*exp(-4*(it/Max_iter));                                                           % Eqs. (5.1) & (9.1)
            
            M_Best=Best_Cost;
            M_Better=Better_Cost;
            M_Worst=Worst_Cost;
            
            for i=1:pop
                
               % Updating rule stage
                del=2*rand*alpha-alpha;                                                           % Eq. (5)
                sigm=2*rand*alpha-alpha;                                                          % Eq. (9)                                          
                                                                 
                % Select three random solutions
                A1=randperm(pop);
                A1(A1==i)=[];
                a=A1(1);b=A1(2);c=A1(3);
                
                e=1e-25;
                epsi=e*rand;
                
                omg = max([M(a) M(b) M(c)]);
                MM = [(M(a)-M(b)) (M(a)-M(c)) (M(b)-M(c))];
                
                W(1) = cos(MM(1)+pi)*exp(-(MM(1))/omg);                                           % Eq. (4.2)
                W(2) = cos(MM(2)+pi)*exp(-(MM(2))/omg);                                           % Eq. (4.3)
                W(3) = cos(MM(3)+pi)*exp(-(MM(3))/omg);                                           % Eq. (4.4)
                Wt = sum(W);
                
                WM1 = del.*(W(1).*(X(a,:)-X(b,:))+W(2).*(X(a,:)-X(c,:))+ ...                      % Eq. (4.1)
                    W(3).*(X(b,:)-X(c,:)))/(Wt+1)+epsi;
                
                omg = max([M_Best M_Better M_Worst]);
                MM = [(M_Best-M_Better) (M_Best-M_Worst) (M_Better-M_Worst)];
                
                W(1) = cos(MM(1)+pi)*exp(-MM(1)/omg);                                             % Eq. (4.7)
                W(2) = cos(MM(2)+pi)*exp(-MM(2)/omg);                                             % Eq. (4.8)
                W(3) = cos(MM(3)+pi)*exp(-MM(3)/omg);                                             % Eq. (4.9)
                Wt = sum(W);
                
                WM2 = del.*(W(1).*(Best_pos-Better_X)+W(2).*(Best_pos-Worst_X)+ ...                   % Eq. (4.6)
                    W(3).*(Better_X-Worst_X))/(Wt+1)+epsi;
                
                % Determine MeanRule 
                r = unifrnd(0.1,0.5);
                MeanRule = r.*WM1+(1-r).*WM2;                                                     % Eq. (4)
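                % MeanRule blends the population-based weighted mean WM1 with the
                % best/better/worst-based WM2; r in [0.1, 0.5] weights the blend.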
                
                if rand<0.5
                    z1 = X(i,:)+sigm.*(rand.*MeanRule)+randn.*(Best_pos-X(a,:))/(M_Best-M(a)+1);
                    z2 = Best_pos+sigm.*(rand.*MeanRule)+randn.*(X(a,:)-X(b,:))/(M(a)-M(b)+1);
                else                                                                              % Eq. (8)
                    z1 = X(a,:)+sigm.*(rand.*MeanRule)+randn.*(X(b,:)-X(c,:))/(M(b)-M(c)+1);
                    z2 = Better_X+sigm.*(rand.*MeanRule)+randn.*(X(a,:)-X(b,:))/(M(a)-M(b)+1);
                end
                
               % Vector combining stage
                u=zeros(1,dim);
                for j=1:dim
                    mu = 0.05*randn;
                    if rand <0.5 
                        if rand<0.5
                            u(j) = z1(j) + mu*abs(z1(j)-z2(j));                                   % Eq. (10.1)
                        else
                            u(j) = z2(j) + mu*abs(z1(j)-z2(j));                                   % Eq. (10.2)
                        end
                    else
                        u(j) = X(i,j);                                                            % Eq. (10.3)
                    end
                end               
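
               % Local search stage -- the post's excerpt stops above; the remainder of
               % the loop body below is a sketch following the structure of the published
               % INFO reference implementation (Ahmadianfar et al.); the min/max boundary
               % clamp stands in for the reference BC() helper and is an assumption.
                if rand<0.5
                    L=rand<0.5;
                    v1=(1-L)*2*rand+L;
                    v2=rand.*L+(1-L);
                    Xavg=(X(a,:)+X(b,:)+X(c,:))/3;                                        % Eq. (11.4)
                    phi=rand;
                    Xrnd = phi.*Xavg+(1-phi)*(phi.*Better_X+(1-phi).*Best_pos);           % Eq. (11.3)
                    Randn = L.*randn(1,dim)+(1-L).*randn;
                    if rand<0.5
                        u = Best_pos + Randn.*(MeanRule+randn.*(Best_pos-X(a,:)));        % Eq. (11.1)
                    else
                        u = Xrnd + Randn.*(MeanRule+randn.*(v1*Best_pos-v2*Xrnd));        % Eq. (11.2)
                    end
                end

                % Clamp to the search bounds, evaluate, and keep the candidate if better
                u = min(max(u, lb), ub);
                New_Cost = fobj(u);
                if New_Cost < Cost(i)
                    X(i,:) = u;
                    Cost(i) = New_Cost;
                    M(i) = Cost(i);
                    if Cost(i) < Best_Cost
                        Best_pos = X(i,:);
                        Best_Cost = Cost(i);
                    end
                end
            end

            % Refresh the worst and a randomly ranked "better" solution for the next pass
            [~, ind] = sort(Cost);
            Worst_X = X(ind(end),:);
            Worst_Cost = Cost(ind(end));
            I = randi([2 5]);
            Better_X = X(ind(I),:);
            Better_Cost = Cost(ind(I));

            curve(it) = Best_Cost;                                                        % best-so-far cost
            avcurve(it) = mean(Cost);                                                     % population-average cost
        end
end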

Learning Summary

The extreme learning machine (ELM) is an artificial neural network model from machine learning: a learning algorithm for single-hidden-layer feedforward neural networks. ELMs are feedforward networks used for classification, regression, clustering, sparse approximation, compression, and feature learning, with one or more layers of hidden nodes whose parameters (not only the weights connecting the inputs to the hidden nodes) need not be tuned. These hidden nodes can be assigned at random and never updated (that is, they are random projections followed by a nonlinear transform), or they can be inherited from their ancestors without modification. In most cases, the output weights of the hidden nodes are learned in a single step, which essentially amounts to fitting a linear model.
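
In the program above, elmtrain and elmpredict (function files not reproduced in the post) embody exactly this one-step scheme: with IW and B fixed by INFO, training reduces to a single pseudoinverse solve for the output weights LW. A toy sketch of the idea (data, sizes, and names here are illustrative, not the author's files):

%%  One-step ELM fit on toy data (illustrative sketch)
rng(0)                                     % reproducible toy example
P  = linspace(0, 2*pi, 50);                % 1 x 50 inputs
T  = sin(P);                               % 1 x 50 targets
IW = 2 * rand(20, 1) - 1;                  % random, never-updated input weights
B  = rand(20, 1);                          % random hidden biases
H  = 1 ./ (1 + exp(-(IW * P + B)));        % fixed nonlinear random projection ('sig')
LW = pinv(H') * T';                        % the single learning step: a linear solve
Y  = (H' * LW)';                           % fitted outputs approximate sin(P)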

References

[1] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, “Extreme learning machine: A new learning scheme of feedforward neural networks,” in Proc. Int. Joint Conf. Neural Networks, July 2004, vol. 2, pp. 985–990.
[2] https://blog.csdn.net/kjm13182345320/article/details/127361354


Reposted from blog.csdn.net/kjm13182345320/article/details/131273923