Particle Swarm Optimization (PSO) Optimized BP Neural Network (PSO-BP) Regression Prediction: MATLAB Code Implementation

1. The particle swarm optimization algorithm (PSO)

        Particle swarm optimization (PSO) is a classic heuristic algorithm proposed by Kennedy and Eberhart in 1995, inspired by research on the foraging behavior of bird flocks. Through cooperation and information sharing among the individuals in a group, PSO moves the group from disorder to order in the solution space: members learn from their own experience and that of other members, continuously adjusting their search pattern to find the optimal solution. Because it has few parameters to tune and converges quickly, PSO is widely used in neural network training and other function optimization tasks.

        The basic idea of the PSO algorithm is to model the individuals of a bird flock as particles, each with velocity and position attributes. Each particle searches for the optimal solution independently in the search space and shares information with the other particles to determine the current global best. Each particle then adjusts its velocity and position according to this global best, and the iterative updates gradually converge on the global optimum. The implementation steps of the PSO algorithm are as follows:

(1) Initialize parameters such as the maximum number of iterations, population size, individual learning factor, social learning factor, and inertia weight; initialize the particle positions; and evaluate the initial fitness to obtain the initial best particle;

(2) Evaluate the fitness of the population and update the best position found by the current swarm;

(3) Update the position and fitness of the global best particle found so far;

(4) Repeat steps (2)~(3) until the best individual position and best fitness are obtained, then exit the loop. After a finite number of iterations, each particle in the swarm approaches the optimal solution. The velocity and position updates applied at each iteration are given below.
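The core of steps (2)~(4) is the standard PSO update rule. At iteration t, particle i updates its velocity and position as:

v_i(t+1) = w*v_i(t) + c1*r1*(pbest_i - x_i(t)) + c2*r2*(gbest - x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)

where w is the inertia weight, c1 and c2 are the individual and social learning factors, r1 and r2 are uniform random numbers in [0, 1], pbest_i is the best position found so far by particle i, and gbest is the best position found so far by the whole swarm. This is exactly the update implemented in the code of Section 3.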

2. Construction of the PSO-BP prediction model

        When a BP neural network is built, randomly initialized connection weights introduce error into the prediction results, and gradient-descent training is slow and prone to local minima, making it difficult to reach the global optimum. The particle swarm optimization algorithm is therefore used to optimize the initial weights and thresholds to improve prediction accuracy and generalization ability. The construction process is:

(1) Normalize the data, build the BP neural network, determine its topology, and initialize the network weights and thresholds;

(2) Initialize the PSO parameters: maximum number of iterations, population size, individual learning factor, social learning factor, inertia weight, etc.;

(3) Initialize the PSO population positions, computing the number of variables to optimize from the BP network structure;

(4) Run the PSO optimization loop, with the fitness function set to the mean squared error of the BP network's predictions, continuously updating the best particle position until the maximum number of iterations is reached, then terminate the PSO algorithm (a sketch of this fitness function is given after this list);

(5) Assign the weight and threshold parameters optimized by PSO to the BP neural network, i.e., output the optimal PSO-BP model; use PSO-BP for training and prediction and compare it with the unoptimized BP network.
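The fitness function used in step (4) decodes a particle position into BP weights and thresholds and returns the network's test-set error. A minimal sketch matching the call signature used in Section 3, assuming it lives in its own file fitness.m (the exact implementation ships with the downloadable code):

function [err, net] = fitness(x, inputnum, hiddennum, outputnum, net, ...
        inputn, outputn, output_train, inputn_test, outputps, output_test)
    % Decode the particle position x into BP weights and thresholds
    % (output_train is kept only to match the call signature used in Section 3)
    w1 = x(1 : inputnum*hiddennum);
    B1 = x(inputnum*hiddennum+1 : inputnum*hiddennum+hiddennum);
    w2 = x(inputnum*hiddennum+hiddennum+1 : inputnum*hiddennum+hiddennum+hiddennum*outputnum);
    B2 = x(end-outputnum+1 : end);
    net.iw{1,1} = reshape(w1, hiddennum, inputnum);
    net.lw{2,1} = reshape(w2, outputnum, hiddennum);
    net.b{1}    = reshape(B1, hiddennum, 1);
    net.b{2}    = reshape(B2, outputnum, 1);
    % Train the network and use the test-set MSE as the fitness value
    net  = train(net, inputn, outputn);
    an   = sim(net, inputn_test);                 % normalized test predictions
    pred = mapminmax('reverse', an, outputps);    % de-normalize the predictions
    err  = mean((output_test(:) - pred(:)).^2);   % fitness = test-set MSE
end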

Code acquisition: http://t.csdn.cn/GIUzd

3. Key code
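The code below assumes the dataset has already been split into training and test sets and normalized. A minimal preprocessing sketch, assuming the raw data are held in input_train, input_test, and output_train (these variable names are not shown in the original excerpt):

%% Assumed preprocessing (not shown in the original excerpt)
[inputn,inputps] = mapminmax(input_train,-1,1);        % normalize the training inputs to [-1,1]
[outputn,outputps] = mapminmax(output_train,-1,1);     % normalize the training targets to [-1,1]
inputn_test = mapminmax('apply',input_test,inputps);   % apply the same mapping to the test inputs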

%% Build the BP model
net=newff(inputn,outputn,hiddennum_best,{'tansig','purelin'},'trainlm');

% Set the BP training parameters
net.trainParam.epochs=1000;        % maximum number of training epochs
net.trainParam.lr=0.01;            % learning rate
net.trainParam.goal=0.00001;       % target training error
net.trainParam.show=25;            % display interval
net.trainParam.mc=0.01;            % momentum factor
net.trainParam.min_grad=1e-6;      % minimum performance gradient
net.trainParam.max_fail=6;         % maximum number of validation failures

%% Initialize the PSO parameters
popsize=10;   % population size
maxgen=50;    % maximum number of generations
dim=inputnum*hiddennum_best+hiddennum_best+hiddennum_best*outputnum+outputnum;    % number of variables to optimize: input-hidden weights + hidden thresholds + hidden-output weights + output thresholds
lb=repmat(-3,1,dim);    % lower bounds of the variables
ub=repmat(3,1,dim);     % upper bounds of the variables
c1 = 2;   % individual learning factor, also called the individual acceleration constant
c2 = 2;   % social learning factor, also called the social acceleration constant
w = 0.9;  % inertia weight
vmax = 3*ones(1,dim);           % maximum particle speed per dimension
vmax = repmat(vmax,popsize,1);  % expand to one row per particle

%% Initialize the particle positions and velocities
x = zeros(popsize,dim);
for i = 1: dim
    x(:,i) = lb(i) + (ub(i)-lb(i))*rand(popsize,1);   % random initial positions within the search bounds
end
v = -vmax + 2*vmax .* rand(popsize,dim);  % random initial velocities in [-vmax, vmax]

%% Evaluate the fitness
fit = zeros(popsize,1);  % initialize the fitness of all popsize particles to zero
for i = 1:popsize  % loop over the swarm and evaluate each particle
    [fit(i),NET]= fitness(x(i,:),inputnum,hiddennum_best,outputnum,net,inputn,outputn,output_train,inputn_test,outputps,output_test);   % call the fitness function
    eval( strcat('NETA.net',int2str(i),'=NET;'))   % store the network trained for particle i
end
pbest = x;   % best position found so far by each particle
ind = find(fit == min(fit), 1);  % index of the particle with the lowest fitness
gbest = x(ind,:);  % best position found so far by the whole swarm
fit_gbest=fit(ind);  % best fitness found so far
eval( strcat('Net=NETA.net',int2str(ind),';'));   % network of the global best particle
eval( strcat('NETT=NETA.net',int2str(ind),';'));  % working copy used during the update loop

%% Evolution loop
for d = 1:maxgen  % iterate for maxgen generations
    for i = 1:popsize   % update the velocity and position of particle i
        v(i,:) = w*v(i,:) + c1*rand(1)*(pbest(i,:) - x(i,:)) + c2*rand(1)*(gbest - x(i,:));  % velocity update
        for j = 1: dim   % clamp the velocity to [-vmax, vmax]
            if v(i,j) < -vmax(i,j)
                v(i,j) = -vmax(i,j);
            elseif v(i,j) > vmax(i,j)
                v(i,j) = vmax(i,j);
            end
        end
        x(i,:) = x(i,:) + v(i,:); % position update
        for j = 1: dim   % clamp the position to [lb, ub]
            if x(i,j) < lb(j)
                x(i,j) = lb(j);
            elseif x(i,j) > ub(j)
                x(i,j) = ub(j);
            end
        end
        [fit(i),NET] = fitness(x(i,:),inputnum,hiddennum_best,outputnum,net,inputn,outputn,output_train,inputn_test,outputps,output_test);
        eval( strcat('NETA.net',int2str(i),'=NET;'))   % store the network trained for particle i
        [fit_pbest,~]=fitness(pbest(i,:),inputnum,hiddennum_best,outputnum,net,inputn,outputn,output_train,inputn_test,outputps,output_test);
        if fit(i) < fit_pbest    % update the personal best of particle i
           pbest(i,:) = x(i,:);
           fit_pbest = fit(i);
           eval( strcat('NETT=NETA.net',int2str(i),';'))
        end
        % update the global best position found so far
        if  fit_pbest < fit_gbest
            gbest = pbest(i,:);
            fit_gbest = fit_pbest;
            Net = NETT;
        end
    end
    Convergence_curve(d) = fit_gbest;  % record the best fitness of generation d
    waitbar(d/maxgen,h0,[num2str(d/maxgen*100),'%'])   % h0 is a waitbar handle created earlier
    if getappdata(h0,'canceling')   % allow the user to cancel the run
        break
    end
end
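After the loop finishes, the recorded Convergence_curve can be plotted to obtain the fitness evolution curve shown in the simulation results; a minimal sketch:

%% Sketch: plot the PSO convergence curve
figure;
plot(Convergence_curve,'r-','LineWidth',1.5);
xlabel('Generation');
ylabel('Best fitness (MSE)');
title('PSO fitness evolution curve');
grid on;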

%% Decode the optimized weights and thresholds
Best_pos = gbest;   % best particle position found by PSO
w1=Best_pos(1:inputnum*hiddennum_best);   % input-to-hidden weights
B1=Best_pos(inputnum*hiddennum_best+1:inputnum*hiddennum_best+hiddennum_best);   % hidden-layer thresholds
w2=Best_pos(inputnum*hiddennum_best+hiddennum_best+1:inputnum*hiddennum_best+hiddennum_best+hiddennum_best*outputnum);   % hidden-to-output weights
B2=Best_pos(inputnum*hiddennum_best+hiddennum_best+hiddennum_best*outputnum+1:inputnum*hiddennum_best+hiddennum_best+hiddennum_best*outputnum+outputnum);   % output-layer thresholds

% Reshape the vectors into the network's weight matrices
net.iw{1,1}=reshape(w1,hiddennum_best,inputnum);
net.lw{2,1}=reshape(w2,outputnum,hiddennum_best);
net.b{1}=reshape(B1,hiddennum_best,1);
net.b{2}=reshape(B2,outputnum,1);
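With the optimized weights and thresholds in place, a typical usage sketch (variable names follow the code above) trains the PSO-initialized network and evaluates it on the test set:

%% Sketch: train the PSO-initialized network and evaluate it
net = train(net, inputn, outputn);                  % fine-tune starting from the PSO weights
an  = sim(net, inputn_test);                        % normalized test predictions
PSO_BP_pred = mapminmax('reverse', an, outputps);   % de-normalize the predictions
rmse = sqrt(mean((output_test(:) - PSO_BP_pred(:)).^2));   % test RMSE of PSO-BP
fprintf('PSO-BP test RMSE: %.4f\n', rmse);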

4. Simulation results

(1) The optimal number of hidden layer nodes is determined from the numbers of input and output nodes by an empirical formula (a sketch of this search is given after this list);

(2) Prediction comparison chart and error chart of PSO-BP and BP;

(3) Error metrics and prediction accuracy of BP and PSO-BP;

(4) Fitness evolution curve of the PSO algorithm;

(5) Regression plots of the BP and PSO-BP models;

(6) Error histograms of the BP and PSO-BP models.
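A minimal sketch of the empirical search mentioned in (1), assuming the common rule of thumb hiddennum = sqrt(inputnum + outputnum) + a with a = 1..10 and selecting the candidate with the lowest training MSE (the exact selection criterion in the full code may differ):

%% Sketch: search for the optimal number of hidden nodes
base = round(sqrt(inputnum + outputnum));   % empirical base value
best_mse = inf;
for a = 1:10
    h = base + a;                           % candidate hidden layer size
    tmp = newff(inputn, outputn, h, {'tansig','purelin'}, 'trainlm');
    tmp.trainParam.showWindow = false;      % suppress the training GUI
    tmp.trainParam.epochs = 100;
    tmp = train(tmp, inputn, outputn);
    y = sim(tmp, inputn);
    e = mean((outputn(:) - y(:)).^2);       % training MSE of this candidate
    if e < best_mse
        best_mse = e;
        hiddennum_best = h;                 % keep the best candidate
    end
end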

5. Conclusion

    It should be noted that both the PSO algorithm and the BP neural network are stochastic, so the same parameter settings may produce different optimization results from run to run; repeated experiments are required to verify the robustness and reliability of the model, as in the sketch below.
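A minimal sketch of such a repeated-run check, assuming the whole pipeline above has been wrapped in a hypothetical function run_pso_bp that returns the test-set RMSE of one run:

%% Sketch: robustness check over repeated runs
seeds = 1:10;
rmse_runs = zeros(size(seeds));
for k = 1:numel(seeds)
    rng(seeds(k));                 % fix the random seed for run k
    rmse_runs(k) = run_pso_bp();   % hypothetical wrapper around the pipeline above
end
fprintf('Test RMSE over %d runs: mean %.4f, std %.4f\n', ...
    numel(seeds), mean(rmse_runs), std(rmse_runs));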
