Whale Optimization Algorithm (WOA)-optimized BP neural network (WOA-BP) for regression prediction: MATLAB code implementation

1. Whale optimization algorithm (WOA)

The Whale Optimization Algorithm (WOA) is a heuristic optimization algorithm proposed by Mirjalili et al. in 2016, inspired by the hunting behavior of humpback whales. As social mammals, humpback whales cooperate to round up prey: whales in a group move toward other whales to encircle the prey, then swim in a shrinking circle while ejecting bubbles, forming a bubble net that herds the prey together. This unique hunting method is called bubble-net foraging. The core idea of WOA derives from this behavior: it simulates encircling by moving search agents toward a random agent or the current best agent, and models the bubble-net attack with a spiral position update. WOA has the advantages of simple operation, few tuning parameters, and a strong ability to escape local optima. The algorithm flow of WOA is:

(1) Initialize the WOA parameters, including the whale population size, the maximum number of iterations, and the initial positions of the whales;

(2) Calculate the fitness of each whale in the population;

(3) Select the whale with the best fitness as the current optimal position;

(4) Iteratively update the positions of the next generation of whales;

(5) When the termination condition is reached, output the optimal individual, i.e., the global optimal solution.
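The five steps above can be sketched compactly. The following is a minimal Python illustration (not the article's MATLAB code, which appears in section 3) that runs WOA on a simple sphere function; the population size, iteration count, bounds, and seed are arbitrary demo choices, and the spiral parameter `l` is drawn from the textbook range [-1, 1]:

```python
import numpy as np

def woa(objective, dim, lb, ub, popsize=30, maxgen=50, seed=0):
    """Minimal Whale Optimization Algorithm (minimization)."""
    rng = np.random.default_rng(seed)
    # (1) initialize the population uniformly inside the bounds
    X = rng.uniform(lb, ub, size=(popsize, dim))
    leader_pos, leader_score = None, np.inf
    for t in range(maxgen):
        # (2)-(3) evaluate fitness and keep the best whale found so far
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(objective, 1, X)
        if fit.min() < leader_score:
            leader_score = float(fit.min())
            leader_pos = X[fit.argmin()].copy()
        a = 2 - 2 * t / maxgen              # decreases linearly from 2 to 0
        # (4) update every whale's position
        for i in range(popsize):
            r1, r2 = rng.random(), rng.random()
            A, C = 2 * a * r1 - a, 2 * r2
            p, l = rng.random(), rng.uniform(-1, 1)
            if p < 0.5:
                if abs(A) >= 1:             # exploration: move toward a random whale
                    X_rand = X[rng.integers(popsize)]
                    X[i] = X_rand - A * np.abs(C * X_rand - X[i])
                else:                       # exploitation: shrink toward the leader
                    X[i] = leader_pos - A * np.abs(C * leader_pos - X[i])
            else:                           # spiral bubble-net attack
                D = np.abs(leader_pos - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + leader_pos
    # (5) return the global best found
    return leader_pos, leader_score

best_x, best_f = woa(lambda x: float(np.sum(x ** 2)), dim=5, lb=-3.0, ub=3.0)
print(best_f)
```

On this 5-dimensional sphere function the leader score shrinks toward zero, illustrating the exploration/exploitation switch driven by `|A|` and the random number `p`.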

2. Construction of WOA-BP prediction model

The WOA algorithm optimizes the initial weights and thresholds (biases) of the BP neural network, establishing a more stable WOA-BP prediction model with improved prediction accuracy and generalization ability. The specific process is as follows:

(1) Normalize the data, build the BP neural network, determine its topology, and initialize the network weights and thresholds;

(2) Initialize the WOA parameters, compute the dimension of the decision vector (the total number of weights and thresholds), and select the mean squared error as the objective function to minimize;

(3) Set the stopping criterion and use WOA to optimize the weight and threshold parameters of the BP neural network;

(4) Assign the parameters optimized by WOA to the BP neural network, yielding the final WOA-BP model; train and predict with WOA-BP and compare it against the unoptimized BP network.
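In step (2), each whale's position encodes every weight and threshold of the network as one flat vector, so the decision dimension is `inputnum*hiddennum_best + hiddennum_best + hiddennum_best*outputnum + outputnum`, matching the `dim` computed in the MATLAB code below. A small Python sketch of this encoding and its inverse, with layer sizes chosen arbitrarily for the demo:

```python
import numpy as np

def decode(vec, n_in, n_hid, n_out):
    """Split a flat WOA position vector into BP weight matrices and biases."""
    i = 0
    w1 = vec[i:i + n_in * n_hid].reshape(n_hid, n_in); i += n_in * n_hid
    b1 = vec[i:i + n_hid].reshape(n_hid, 1);           i += n_hid
    w2 = vec[i:i + n_hid * n_out].reshape(n_out, n_hid); i += n_hid * n_out
    b2 = vec[i:i + n_out].reshape(n_out, 1)
    return w1, b1, w2, b2

n_in, n_hid, n_out = 4, 7, 1            # example sizes, not from the article
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
vec = np.arange(dim, dtype=float)       # stand-in for one whale's position
w1, b1, w2, b2 = decode(vec, n_in, n_hid, n_out)
print(dim, w1.shape, w2.shape)
```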


3. Key code

%% Build the BP model
net=newff(inputn,outputn,hiddennum_best,{'tansig','purelin'},'trainlm');

% Set BP training parameters
net.trainParam.epochs=1000;        % maximum number of training epochs
net.trainParam.lr=0.01;            % learning rate
net.trainParam.goal=0.00001;       % training goal (minimum error)
net.trainParam.show=25;            % display interval
net.trainParam.mc=0.01;            % momentum factor
net.trainParam.min_grad=1e-6;      % minimum performance gradient
net.trainParam.max_fail=6;         % maximum validation failures

%% Set WOA parameters
popsize=30;    % population size
maxgen=50;     % maximum number of generations
dim=inputnum*hiddennum_best+hiddennum_best+hiddennum_best*outputnum+outputnum;    % number of decision variables
lb=repmat(-3,1,dim);    % lower bounds of the decision variables
ub=repmat(3,1,dim);     % upper bounds of the decision variables
% Initialize the leader position and leader score
Leader_pos=zeros(1,dim);
Leader_score=10^20;    % minimization, so start from a very large score

%% Initialize the population
for i=1:dim
    ub_i=ub(i);
    lb_i=lb(i);
    Positions(:,i)=rand(popsize,1).*(ub_i-lb_i)+lb_i;
end
curve=zeros(maxgen,1);   % preallocate the convergence curve

%% Evolution loop
h0=waitbar(0,'WOA optimizing...');
for t=1:maxgen
    for i=1:size(Positions,1)   % check each individual for bound violations
        % Clamp search agents that leave the search space back to the bounds
        Flag4ub=Positions(i,:)>ub;
        Flag4lb=Positions(i,:)<lb;
        Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;
        fit(i)=fitness(Positions(i,:),inputnum,hiddennum_best,outputnum,net,inputn,outputn,output_train,inputn_test,outputps,output_test);
        % Update the leader position
        if fit(i)<Leader_score
            Leader_score=fit(i);
            Leader_pos=Positions(i,:);
        end
    end
    
    a=2-t*((2)/maxgen);      % a decreases linearly from 2 to 0
    a2=-1+t*((-1)/maxgen);   % a2 decreases linearly from -1 to -2
    % Update the position of every whale
    for i=1:size(Positions,1)
        r1=rand();r2=rand();
        A=2*a*r1-a;
        C=2*r2;
        b=1;                 % constant defining the logarithmic spiral shape
        l=(a2-1)*rand+1;     % random number for the spiral update
        p = rand();
        for j=1:size(Positions,2)   % loop over every dimension of the individual
            % Shrinking encircling mechanism
            if p<0.5
                if abs(A)>=1   % exploration: move toward a random whale
                    rand_leader_index = floor(popsize*rand()+1);
                    X_rand = Positions(rand_leader_index, :);
                    D_X_rand=abs(C*X_rand(j)-Positions(i,j));
                    Positions(i,j)=X_rand(j)-A*D_X_rand;
                elseif abs(A)<1   % exploitation: move toward the leader
                    D_Leader=abs(C*Leader_pos(j)-Positions(i,j));
                    Positions(i,j)=Leader_pos(j)-A*D_Leader;
                end
            elseif p>=0.5   % spiral bubble-net attack
                distance2Leader=abs(Leader_pos(j)-Positions(i,j));
                Positions(i,j)=distance2Leader*exp(b.*l).*cos(l.*2*pi)+Leader_pos(j);
            end
        end
    end
    curve(t)=Leader_score;
    waitbar(t/maxgen,h0)
end
close(h0)
setdemorandstream(pi);   % fix the random stream before retraining

%% Extract the optimized weights and thresholds
w1=Leader_pos(1:inputnum*hiddennum_best);
B1=Leader_pos(inputnum*hiddennum_best+1:inputnum*hiddennum_best+hiddennum_best);
w2=Leader_pos(inputnum*hiddennum_best+hiddennum_best+1:inputnum*hiddennum_best+hiddennum_best+hiddennum_best*outputnum);
B2=Leader_pos(inputnum*hiddennum_best+hiddennum_best+hiddennum_best*outputnum+1:inputnum*hiddennum_best+hiddennum_best+hiddennum_best*outputnum+outputnum);
% Reshape the flat vectors into matrices and assign them to the network
net.iw{1,1}=reshape(w1,hiddennum_best,inputnum);
net.lw{2,1}=reshape(w2,outputnum,hiddennum_best);
net.b{1}=reshape(B1,hiddennum_best,1);
net.b{2}=reshape(B2,outputnum,1);
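Given the reshaped weights, the network's forward pass is `tansig` (hyperbolic tangent) in the hidden layer and `purelin` (identity) at the output, and the fitness WOA minimizes is the mean squared error of this pass. A hedged Python sketch of that computation, with toy sizes and random data standing in for the article's normalized training set:

```python
import numpy as np

def mse_fitness(w1, b1, w2, b2, X, Y):
    """MSE of a one-hidden-layer BP net: tansig hidden layer, purelin output.
    X is (n_in, n_samples), Y is (n_out, n_samples), following MATLAB's
    column-per-sample convention used by newff."""
    hidden = np.tanh(w1 @ X + b1)   # tansig is the hyperbolic tangent
    pred = w2 @ hidden + b2         # purelin is the identity
    return float(np.mean((pred - Y) ** 2))

# Toy example (made up, not from the article): 2 inputs, 3 hidden nodes, 1 output.
rng = np.random.default_rng(1)
w1, b1 = rng.standard_normal((3, 2)), rng.standard_normal((3, 1))
w2, b2 = rng.standard_normal((1, 3)), rng.standard_normal((1, 1))
X = rng.standard_normal((2, 5))
Y = w2 @ np.tanh(w1 @ X + b1) + b2  # targets the net fits exactly, so MSE is 0
err = mse_fitness(w1, b1, w2, b2, X, Y)
print(err)
```

In the article's pipeline this role is played by the MATLAB `fitness(...)` function called inside the evolution loop; only lower MSE values replace the leader.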

4. Simulation results

(1) The optimal number of hidden-layer nodes, determined from the numbers of input and output nodes according to the empirical formula;

(2) Prediction comparison chart and error chart of WOA-BP and BP;

(3) Error metrics and prediction accuracy of BP and WOA-BP;

(4) WOA fitness evolution (convergence) curve;

(5) Regression plots of the BP and WOA-BP models;

(6) Error histograms of the BP and WOA-BP models.

5. Conclusion

It should be noted that both the whale optimization algorithm and the BP neural network are stochastic, so repeated runs with the same parameter settings may produce different results; repeated experiments are required to verify the robustness and reliability of the model.

Origin blog.csdn.net/baoliang12345/article/details/130493695