Super detailed | particle swarm optimization algorithm and its MATLAB implementation

This article introduces the background and theory of the particle swarm optimization algorithm and walks through its implementation alongside the corresponding parts of a MATLAB program. See the end of the article for how to obtain the code.

00 Article Directory

1 Particle swarm optimization algorithm
2 Problem introduction
3 MATLAB implementation
4 Improvement strategy
5 Outlook

01 Particle swarm optimization algorithm

1.1 Background of particle swarm optimization algorithm

In recent years, by simulating the behavior of social biological groups, researchers have proposed a new class of biologically inspired computing methods: swarm intelligence optimization algorithms. The core ideas of swarm intelligence optimization derive from special phenomena in nature or special behaviors of group-living animals, in particular the highly intelligent coordination and cooperation mechanisms within biological groups, such as the coordinated movement of bird flocks and fish schools (see the figure below), the migration of goose flocks, the cooperative work of ant colonies, and the coordinated hunting of wolf packs. Once proposed, swarm intelligence optimization algorithms attracted extensive attention from researchers in many disciplines and became a hotspot and frontier of interdisciplinary research spanning artificial intelligence, sociology, economics, and biology.

[Figure: coordinated movement of bird flocks and fish schools; image from the Internet]

Particle swarm optimization (PSO) was proposed in 1995 by the American social psychologist Kennedy and the electrical engineer Eberhart [1]. Its main idea comes from the study of bird flocking behavior; their model and simulation algorithm mainly build on the model proposed by the biologist Heppner [2].

The PSO algorithm solves a problem by initializing a set of random solutions and searching iteratively for the optimum. Each candidate solution of the optimization problem is regarded as a bird in the search space, called a "particle". Every particle has a fitness value determined by the objective function, and a velocity that determines the direction and distance of its flight. The particles search the solution space by following the current optimal particles in the swarm. Since it was proposed, the PSO algorithm has attracted the attention of many scholars at home and abroad because it is computationally simple, easy to implement, and has few control parameters.

Research on the PSO algorithm falls mainly into three areas: theoretical analysis of the algorithm, improvements to its performance, and its application in various fields. Judging from the current state of research, the latter two account for the vast majority.

1.2 Standard particle swarm optimization algorithm

The basic idea of the PSO algorithm is to find the optimal solution through cooperation and information sharing among individuals in a group; it is a bionic intelligent computing method based on swarm intelligence. First, a group of particles without volume or mass is randomly initialized, and each particle is regarded as a feasible solution to the optimization problem, whose quality is measured by a preset fitness function. Each particle moves in the feasible solution space, with its direction and distance determined by a velocity variable. Particles generally follow the current optimal particle, and the optimal solution is obtained through a generation-by-generation search. In each generation, a particle tracks two optima: the best solution found so far by the particle itself, and the best solution found so far by the entire population.

Suppose a swarm of n particles flies at a certain velocity in a D-dimensional search space. The state attributes of particle i are defined as follows:

Position (i.e., the decision variables): $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$

where the value of each dimension should lie within the upper and lower bounds of the search space, and the fitness value corresponding to each particle position $X_i$ can be computed from the objective function.

Velocity: $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$

Individual extremum: $P_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$

Population extremum: $P_g = (p_{g1}, p_{g2}, \ldots, p_{gD})$

where $i = 1, 2, \ldots, n$.

During each iteration, a particle updates its velocity and position through the individual extremum and the population extremum, namely:

$v_{id}^{k+1} = w\,v_{id}^{k} + c_1 r_1 (p_{id} - x_{id}^{k}) + c_2 r_2 (p_{gd} - x_{id}^{k})$

$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}$

In the formula, $w$ is the inertia weight; $d = 1, 2, \ldots, D$; $i = 1, 2, \ldots, n$; $k$ is the current iteration number; $v_{id}$ is the particle velocity; $r_1$ and $r_2$ are random numbers uniformly distributed in the interval (0,1); $c_1$ and $c_2$ are called the individual cognition factor and the social learning factor respectively (collectively, the acceleration factors), and usually $c_1 = c_2$. To prevent particles from searching blindly, their position and velocity are generally limited to a certain range.
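For concreteness, the update equations map directly onto MATLAB. The following is a minimal sketch for one particle i and one dimension d, assuming matrices X and V for positions and velocities, pbest/gbest for the individual and population extrema, and bounds Vmax/Xmin/Xmax (all names are illustrative, not taken from the original code):

```matlab
% Minimal sketch of the velocity/position update for particle i, dimension d.
% X, V, pbest, gbest, w, c1, c2, Vmax, Xmin, Xmax are assumed to exist.
r1 = rand;  r2 = rand;                            % uniform random numbers in (0,1)
V(i,d) = w*V(i,d) + c1*r1*(pbest(i,d) - X(i,d)) ...
                  + c2*r2*(gbest(d)   - X(i,d));
V(i,d) = max(min(V(i,d), Vmax), -Vmax);           % keep velocity within limits
X(i,d) = X(i,d) + V(i,d);                         % position update
X(i,d) = max(min(X(i,d), Xmax), Xmin);            % keep position within limits
```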
The right-hand side of the velocity update formula consists of three parts: the first is the "inertia" part, which reflects the particle's habit of motion and represents its tendency to maintain its previous velocity; the second is the "cognitive" part, which reflects the particle's memory of its own history and represents its tendency to approach the best position in its own history; the third is the "social" part, which reflects the group's historical experience of cooperation and knowledge sharing among particles and represents the particle's tendency to approach the best position in the history of the group or neighborhood.

From this point of view, the particle swarm optimization algorithm has good versatility: it is suitable for various types of objective functions and constraints, and it is easy to combine with traditional optimization methods, thereby overcoming its own limitations and solving problems more efficiently.

02 Problem introduction

To verify the performance of the PSO algorithm, we introduce a multimodal nonlinear function:

$f(x_1, x_2) = \dfrac{\sin\sqrt{x_1^2 + x_2^2}}{\sqrt{x_1^2 + x_2^2}} + \exp\!\left(\dfrac{\cos 2\pi x_1 + \cos 2\pi x_2}{2}\right) - 2.71289$

Its surface plot is as follows: [Figure: surface plot of the test function]

The function attains its maximum near (0, 0), where the maximum value is 1.0054.
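As a sketch, the surface can be reproduced in MATLAB as follows; the plotting range [-2, 2] is illustrative, and the small eps offset avoids a 0/0 at the origin:

```matlab
% Sketch: plot the test function over [-2, 2] x [-2, 2].
[x1, x2] = meshgrid(-2:0.05:2, -2:0.05:2);
r = sqrt(x1.^2 + x2.^2) + eps;                    % eps avoids 0/0 at the origin
y = sin(r)./r + exp((cos(2*pi*x1) + cos(2*pi*x2))/2) - 2.71289;
surf(x1, x2, y);  shading interp;
xlabel('x_1');  ylabel('x_2');  zlabel('f(x_1, x_2)');
```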

03 MATLAB implementation

The execution flow of the particle swarm algorithm is shown below, and the code follows the steps in the figure. [Figure: PSO algorithm flow chart]

For the inertia weight, this article adopts a linearly decreasing strategy. Shi et al. [3] pointed out the role of the inertia weight: a larger inertia weight favors global search, while a smaller one favors local search. Therefore, if the inertia weight decreases linearly over the iterations, for example from 0.9 to 0.4, the PSO algorithm has good global search performance at the beginning and can quickly locate the region near the global optimum, and has good local search performance in the later stage, so that it can accurately obtain the global optimal solution. The linear decreasing formula is:

$w(t) = w_{\text{start}} - (w_{\text{start}} - w_{\text{end}}) \cdot \dfrac{t}{t_{\max}}$

In the formula, $t_{\max}$ is the maximum number of iterations; $t$ is the current iteration number; $w_{\text{start}}$ is the initial inertia weight; $w_{\text{end}}$ is the final inertia weight.
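In MATLAB this is a single line inside the iteration loop; a minimal sketch with the illustrative values 0.9 and 0.4 mentioned above:

```matlab
% Sketch: linearly decreasing inertia weight, from wstart down to wend.
wstart = 0.9;  wend = 0.4;  tmax = 100;   % illustrative values
w = wstart - (wstart - wend) * t / tmax;  % t is the current iteration number
```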

3.1 Initialization

Initialization includes setting the particle swarm parameters and initializing the particle positions and velocities. [Code screenshot: initialization]
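Since the original code screenshot is not reproduced here, the following is a minimal sketch of what this step typically looks like; all parameter values and variable names are illustrative:

```matlab
% Sketch: particle swarm parameters and random initialization.
n    = 50;                  % number of particles
D    = 2;                   % dimension of the search space
tmax = 100;                 % maximum number of iterations
c1   = 1.5;  c2 = 1.5;      % acceleration factors (c1 = c2)
wstart = 0.9;  wend = 0.4;  % initial and final inertia weights
Xmax =  2;    Xmin = -2;    % position limits
Vmax =  0.5;  Vmin = -0.5;  % velocity limits

X = Xmin + (Xmax - Xmin) * rand(n, D);   % random initial positions
V = Vmin + (Vmax - Vmin) * rand(n, D);   % random initial velocities
```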

3.2 Evaluating Particles

Find the optimal particle and perform the assignments. [Code screenshot: particle evaluation]
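A minimal sketch of this step, assuming the test function above and the variables from the initialization sketch (this is a maximization problem, so the best particle is the one with the largest fitness):

```matlab
% Sketch: evaluate every particle, then record individual and group bests.
fitness = @(x) sin(sqrt(x(1)^2 + x(2)^2)) / sqrt(x(1)^2 + x(2)^2) ...
             + exp((cos(2*pi*x(1)) + cos(2*pi*x(2))) / 2) - 2.71289;
fit = zeros(n, 1);
for i = 1:n
    fit(i) = fitness(X(i, :));        % fitness of particle i
end
pbest = X;    pfit = fit;             % individual extrema (positions, values)
[gfit, idx] = max(fit);               % group extremum (maximization)
gbest = X(idx, :);
```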

3.3 Update particles and iterate

[Code screenshot: update and iteration loop]
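Again as a sketch rather than the original code, the main loop combines the linearly decreasing inertia weight with the velocity/position updates and the extremum updates, using the variables defined in the sketches above:

```matlab
% Sketch: main PSO loop (maximization).
for t = 1:tmax
    w = wstart - (wstart - wend) * t / tmax;       % linearly decreasing weight
    for i = 1:n
        V(i,:) = w*V(i,:) + c1*rand*(pbest(i,:) - X(i,:)) ...
                          + c2*rand*(gbest - X(i,:));
        V(i,:) = max(min(V(i,:), Vmax), Vmin);     % enforce velocity limits
        X(i,:) = X(i,:) + V(i,:);
        X(i,:) = max(min(X(i,:), Xmax), Xmin);     % enforce position limits

        f = fitness(X(i,:));
        if f > pfit(i)                             % update individual extremum
            pfit(i) = f;   pbest(i,:) = X(i,:);
        end
        if f > gfit                                % update group extremum
            gfit = f;      gbest = X(i,:);
        end
    end
end
fprintf('Best fitness: %.4f at (%.4f, %.4f)\n', gfit, gbest(1), gbest(2));
```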

3.4 Running results

[Figure: running result]

The optimal fitness and optimal position obtained from the run are: [program output screenshot]

It can be seen that the obtained value is almost exactly the extremum of the function.

04 Improvement strategy

In 1999, Clerc introduced a constriction (compression) factor into the evolution equations to guarantee convergence while relaxing the velocity limits, giving the PSO algorithm a better convergence speed. In 1998 and 1999, Angeline introduced the selection and hybridization mechanisms of evolutionary computation into PSO. In 2001, Lovbjerg et al. brought the subpopulation concept of genetic algorithms into the PSO algorithm and introduced a breeding operator for information exchange between subpopulations. In 2005, Dou Quansheng et al. introduced the two mechanisms of simulated annealing and division of labor into the PSO algorithm to enhance its optimization ability, and also introduced an elite group strategy, proposing a simplified PSO algorithm. In 2006, Liu Hongbo et al. analyzed the convergence of the PSO algorithm and used chaotic characteristics to improve the diversity and search ergodicity of the population, improving the particles' continuous search ability.

To obtain the particle swarm code used in this article:

##########################################################
Follow the WeChat official account KAU的云实验台 and reply "PSO"
##########################################################

The improvement methods mentioned above will also be covered in subsequent articles. If this article has helped or inspired you, a like or a follow would be appreciated (ง •̀_•́)ง (it's fine if not).

References

[1] Ozcan E, Mohan C. Particle swarm optimization: Surfing the waves. Proceedings of the 1999 Congress on Evolutionary Computation, 1999: 1939-1943.
[2] Heppner F, Grenander U. A stochastic nonlinear model for coordinated bird flocks. AAAS Publications, 1990.
[3] Shi Y, Eberhart R. A modified particle swarm optimizer. Proceedings of the IEEE International Conference on Evolutionary Computation, 1998: 69-73.
