Particle Swarm Optimization Algorithm and MATLAB Realization

The previous post was about the ant colony optimization algorithm; if you are interested, you can read it here:
https://blog.csdn.net/HuangChen666/article/details/115913181
1. Overview of particle swarm optimization algorithm
2. Particle swarm optimization algorithm solution
     2.1 Continuous solution space problem
     2.2 Components
     2.3 Algorithm process description
     2.4 Interpretation of the particle velocity update formula
     2.5 Analysis of the velocity update parameters
3. Improvement of particle swarm optimization algorithm
4. MATLAB code

1. Overview of particle swarm optimization algorithm

Particle swarm optimization (PSO) is a population-based heuristic search algorithm, first proposed by Kennedy and Eberhart in 1995.
Its main inspiration comes from studies of the collective movement of bird flocks. We can often observe the synchronicity of a flock: although each bird moves independently, the flock as a whole shows highly consistent and complex behavior during flight and can adaptively adjust its flight state and trajectory.
The reason flocks can display such complex flight behavior is probably that each bird follows certain behavioral rules during flight and can make use of the flight information of the other birds in its neighborhood.
The particle swarm optimization algorithm borrows this idea: each particle represents a potential solution in the search space of the problem to be solved and, like a bird, carries "flight information" such as its current position and velocity.
Each particle can obtain information from other individuals in its neighborhood, evaluate the positions it has visited, and change its two state quantities, position and velocity, according to this information and the update rules, exchanging information with the others during the "flight" so as to better adapt to the environment. As this process continues, the particle swarm eventually finds an approximate optimal solution to the problem.

2. Particle swarm optimization algorithm solution

The particle swarm optimization algorithm is generally suited to problems with a continuous solution space, for example searching the solution space with a swarm of particles to find the maximum of a function.

(Figure: four particles searching a one-dimensional function for its maximum)

2.1 Continuous solution space problem

The figure above is a typical case of using particle swarm optimization to find an extreme value. At the start there are four particles. The solution process can be understood as the four particles continually moving toward the particle with the largest value; while moving, each particle keeps updating its own best value and the overall best value, adjusts its velocity under the influence of both, and finally all particles converge to the same extremum.

2.2 Components

1. Particle swarm

  • Each particle corresponds to a feasible solution of the problem to be solved,
    that is, each particle itself is a feasible solution

  • Particles are characterized by their position and velocity.
    In the code each particle is represented by a position and a velocity: the abscissa (x coordinate) is the particle's position, and the velocity describes the particle's next movement trend.
    $x_n^{(i)}$ denotes the position of particle $i$ in round $n$
    $v_n^{(i)}$ denotes the velocity of particle $i$ in round $n$

2. Record

  • $p_{best}^{(i)}$ denotes the historical best position of particle $i$
  • $g_{best}$ denotes the best position found so far by the whole swarm

3. Function to calculate fitness

  • Fitness: $f(x)$ is the objective function; a particle's fitness is the value of $f$ at the particle's current position (a minimal sketch of these components in code follows this list)
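To make the components concrete, here is a minimal MATLAB sketch of how a one-dimensional swarm could be represented. The variable names (Px, Pv, Pbest, Pymax, Gbest) simply mirror the full program in section 4; the swarm size and search interval are the same ones used there, and this is only an illustrative sketch, not the original code.

% Minimal sketch of the data structures of a 1-D particle swarm (names mirror section 4).
pnum  = 50;                                               % number of particles
f     = @(x) x .* sin(x) .* cos(2*x) - 2*x .* sin(3*x);   % fitness function f(x)
Px    = 20 * rand(pnum, 1);       % positions x_n^(i), one entry per particle
Pv    = zeros(pnum, 1);           % velocities v_n^(i)
Pbest = Px;                       % historical best position of each particle
Pymax = -inf(pnum, 1);            % best fitness each particle has found so far
Gbest = [-inf, -inf];             % [best fitness, best position] of the whole swarm
Py    = f(Px);                    % fitness of every particle at its current position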

2.3 Algorithm process description

1. Initialization
  • Initialize the particle swarm: each particle's position and velocity. The position is the particle's initial $x$ coordinate, and the velocity is the (positive or negative) change that will be applied to the $x$ coordinate in the next round; that is, initialize $x_0^{(i)}$ and $v_0^{(i)}$.
  • Initialize the historical best position $p_{best}^{(i)}$ of each particle $i$ and the global historical best position $g_{best}$. $p_{best}^{(i)}$ is initialized with a random value, and the fitness associated with $g_{best}$ is set to negative infinity (because this example searches for a maximum).

2. Perform the following three steps in a loop until the end condition is met

  • Calculate the fitness (i.e. the function value) of each particle: $f(x_n^{(i)})$
  • Update the best fitness of each particle history and its corresponding position, and update the current global best fitness and its corresponding position
  • Update the speed and position of each particle
    $$v_{n+1}^{(i)} = v_n^{(i)} + c_1 r_1\left(p_{best}^{(i)} - x_n^{(i)}\right) + c_2 r_2\left(g_{best} - x_n^{(i)}\right)$$
    $$x_{n+1}^{(i)} = x_n^{(i)} + v_{n+1}^{(i)} \cdot 1$$
    The position update moves each particle along the x-axis (each particle is just a number on the abscissa): the position in the next round equals the position in the previous round plus the velocity multiplied by one unit of time, which is why the factor of 1 is usually not written out. A minimal MATLAB sketch of one full iteration follows.
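The following sketch shows one iteration of the loop above for a maximization problem, without the inertia weight introduced in section 3. It reuses the variables from the sketch in section 2.2; the values c1 = c2 = 2 are an assumption for illustration, not taken from the original program.

% One iteration of the basic loop in section 2.3 (maximization, no inertia weight yet).
c1 = 2; c2 = 2;                               % assumed weight parameters
Py = f(Px);                                   % 1) fitness of every particle
better        = Py > Pymax;                   % 2) refresh each particle's personal best ...
Pymax(better) = Py(better);
Pbest(better) = Px(better);
if max(Pymax) > Gbest(1)                      %    ... and the global best
    [Gbest(1), k] = max(Pymax);
    Gbest(2)      = Pbest(k);
end
Pv = Pv + c1*rand*(Pbest - Px) ...            % 3) velocity: inertia + memory + social terms
        + c2*rand*(Gbest(2) - Px);
Px = Px + Pv;                                 %    position: x_{n+1} = x_n + v_{n+1} * 1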

2.4 Interpretation of the particle velocity update formula

$$v_{n+1}^{(i)} = \underbrace{v_n^{(i)}}_{\text{inertia term}} + \underbrace{c_1 r_1\left(p_{best}^{(i)} - x_n^{(i)}\right)}_{\text{memory term}} + \underbrace{c_2 r_2\left(g_{best} - x_n^{(i)}\right)}_{\text{social term}}$$
It can be seen from the formula that the particle's velocity in the next round = its velocity in the previous round + the tendency to return to its own historical best position + the tendency to move toward the global best position, i.e., inertia term + memory term + social term.
This fixes the relationship between the velocity and the other variables; what remains is the parameter setting. There are two pairs of parameters, $c_k$ and $r_k$ ($k = 1, 2$). $c_k$ is a weight parameter, usually set to about 2, and it mainly affects how fast the optimization converges; $r_k$ is a random parameter, i.e., a random number between 0 and 1.
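As a toy illustration of a single velocity update, the snippet below follows the structure of the formula above; every numeric value in it is invented purely for demonstration.

% Single-particle velocity update; every value here is made up for demonstration only.
x_n   = 3.0;   v_n   = 0.5;        % current position and velocity
pbest = 4.2;   gbest = 7.9;        % the particle's own best position and the global best
c1 = 2; c2 = 2;                    % weight parameters, commonly set to about 2
r1 = rand; r2 = rand;              % random numbers between 0 and 1
v_next = v_n ...                   % inertia term
       + c1*r1*(pbest - x_n) ...   % memory term: pulled back toward its own best
       + c2*r2*(gbest - x_n);      % social term: pulled toward the global best
x_next = x_n + v_next;             % the particle's position in the next round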

2.5 Analysis of the velocity update parameters

The weight parameters mainly affect how fast the particles fly; in practice, $c_1$ and $c_2$ are very often set equal to each other.

3. Improvement of particle swarm optimization algorithm

As particle swarm optimization came into widespread use, it was found that adding an inertia weight gives a better optimization effect.
$$v_{n+1}^{(i)} = w\,v_n^{(i)} + c_1 r_1\left(p_{best}^{(i)} - x_n^{(i)}\right) + c_2 r_2\left(g_{best} - x_n^{(i)}\right)$$
The newly introduced parameter $w$ controls how strongly the previous velocity influences the velocity in the next round, so the algorithm can adapt to different scenarios; a sketch of the weighted update follows.
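In MATLAB the only change from the iteration sketch in section 2.3 is the factor w in front of the previous velocity. This reuses the variables from the earlier sketches, with w = 0.8 as an assumed value (the same one the full program below happens to use). Roughly speaking, w near 1 favors global exploration, while a smaller w favors local refinement around good positions.

% Velocity update with the inertia weight w (the improved form above).
w  = 0.8;                                     % assumed inertia weight
Pv = w*Pv + c1*rand*(Pbest - Px) ...
          + c2*rand*(Gbest(2) - Px);
Px = Px + Pv;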

4. MATLAB code

Find the maximum value of $f(x) = x\sin(x)\cos(2x) - 2x\sin(3x)$ on $[0, 20]$.
(Figure: plot of $f(x)$ on $[0, 20]$, showing multiple peaks)
Because the function has multiple peaks, setting the weight parameters so that $c_2 > c_1$ gives better results.
Code reference https://www.pianshen.com/article/2364328713/

clc;clear;
%% Initialize parameters
f= @(x)x .* sin(x) .* cos(2 * x) - 2 * x .* sin(3 * x);
pnum=50;            % number of particles
iter=100;           % number of iterations
w=0.8;              % inertia weight
c1=0.8;             % weight parameter c1 (memory term)
c2=1.2;             % weight parameter c2 (social term)
xlimit=[0,20];      % position limits
vlimit=[-1,1];      % velocity limits
figure(1);ezplot(f,[xlimit(1),xlimit(2)]);
Px=((xlimit(2)-xlimit(1))*rand(pnum,1))+xlimit(1);      % random initial particle positions
Pbest=Px;                       % historical best position of each particle
Gbest=[-inf,-inf];              % global best [fitness, position]
Pymax=ones(pnum,1)/-eps;        % best fitness each particle has found so far
Pymin=ones(pnum,1)/eps;         % historical minimum fitness (not used in this maximization example)
Pv=zeros(pnum,1);               % initialize particle velocities
Py=f(Px);                       % compute particle fitness
hold on;
plot(Px, Py, 'ro');title('Initial state');
figure(2);
max_record=zeros(iter,1);       % best fitness found up to each iteration
%% Iterative solution
for i=1:iter
    Py=f(Px);          % compute particle fitness
    % update Pbest and Gbest
    for j=1:pnum
        if Py(j)>Pymax(j)
           Pymax(j)=Py(j);
           Pbest(j)=Px(j);
        end
    end
     % global best position
    if Gbest(1)<max(Pymax)
        [Gbest(1),max_index]=max(Pymax);
        Gbest(2)=Pbest(max_index);
    end
    max_record(i)=Gbest(1);
    % update velocities and positions
    Pv=Pv*w+c1*rand*(Pbest-Px)+c2*rand*(repmat(Gbest(2),pnum,1)-Px);
    Pv(Pv>vlimit(2))=vlimit(2);
    Pv(Pv<vlimit(1))=vlimit(1);
    Px=Px+Pv;
    Px(Px>xlimit(2))=xlimit(2);
    Px(Px<xlimit(1))=xlimit(1);
    x0 =xlimit(1):0.01:xlimit(2);
    plot(x0, f(x0), 'b-', Px, f(Px), 'ro');title('Particle positions during iteration')
    pause(0.1);
end

%% Show results
figure(3);plot(max_record);title('Convergence curve');
disp(['Maximum value: ',num2str(Gbest(1))]);
disp(['Position of maximum: ',num2str(Gbest(2))]);
