Popular understanding of particle swarm optimization algorithm

Main content: Introduction to particle swarm optimization algorithm

1. Background introduction

Artificial life


Artificial life: the study of artificial systems that exhibit certain basic characteristics of life. It covers two directions:
  1. Using computational techniques to study biological phenomena;
  2. Using biological mechanisms to study computational problems.
  We are concerned with the second direction. Many computational techniques are derived from biological phenomena, for example neural networks and genetic algorithms. We now turn to another kind of biological system, the social system: a community of simple individuals interacting with one another and with their environment.

Swarm intelligence


Swarm intelligence: individuals in a simulated system use only local information, yet the group as a whole can produce unpredictable, coordinated behavior. We often see flocks of birds, schools of fish, or swarms of plankton; aggregating helps these creatures forage and escape predators. Their communities number in the tens, hundreds, thousands, or even tens of thousands, and there is usually no single leader in command. How do they accomplish such coordinated aggregation and movement?

  When Millonas developed his artificial life algorithm (1994), he proposed the concept of swarm intelligence and put forward five principles:
  1. Proximity: the group should be able to carry out simple space and time computations;
  2. Quality: the group should be able to respond to quality factors in the environment;
  3. Diverse response: the group should not confine its activities to excessively narrow channels;
  4. Stability: the group should not change its mode of behavior every time the environment changes;
  5. Adaptability: the group should change its mode of behavior when doing so is worth the computational price.

Simulating group behavior


Simulation of bird flocking: Reynolds, Heppner, and Grenander proposed simulations of bird flocking. They observed that a traveling flock may suddenly change direction, scatter, or regroup, so there must be some underlying capability or rule that produces these synchronized behaviors. These researchers all believed that such behavior arises from group dynamics within the otherwise unpredictable social behavior of birds. In these early models, the computation relied only on the distances between individuals; that is, the synchronization is the result of each bird's effort to maintain an optimal distance from its neighbors in the flock.

  The study of fish schooling: sociobiologist E. O. Wilson studied fish schools and proposed: "At least in theory, individual members of a school can profit from the discoveries and previous experience of all other members during the search for food. This advantage can outweigh the disadvantages of competition among individuals whenever the food resource is unpredictably distributed." This shows that social sharing of information among members of the same species can bring benefits, and it is the foundation of PSO.

2. Algorithm introduction

The basic idea of particle swarm optimization is to find the optimal solution through collaboration and information sharing among the individuals of a group.
  The advantages of PSO are that it is simple, easy to implement, and has few parameters to tune. It has been widely applied in function optimization, neural network training, fuzzy system control, and other areas where genetic algorithms are used.

Posing the problem


Imagine a scenario: a flock of birds searches for food at random. There is only one piece of food in the area, and none of the birds know where it is, but each bird knows how far it is from the food. What, then, is the optimal search strategy? The simplest and most effective one is to search the area around the bird currently closest to the food.

Problem abstraction


Each bird is abstracted as a particle (a point) without mass or volume, extended into N-dimensional space. The position of particle i in N-dimensional space is the vector Xi = (xi1, xi2, ..., xiN), and its flying velocity is the vector Vi = (vi1, vi2, ..., viN). Each particle has a fitness value determined by the objective function, and it knows the best position it has found so far (pbest) as well as its current position Xi; this can be regarded as the particle's own flying experience. In addition, each particle knows the best position found so far by any particle in the whole swarm (gbest, the best among all pbest values); this can be regarded as the experience of the particle's companions. Each particle uses its own experience and the best experience of its companions to decide its next move.
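
To make the abstraction concrete, here is a minimal sketch of the particle state in Python (the class and field names are illustrative assumptions, not from any particular library):

```python
import numpy as np

# A minimal sketch of the particle state described above; the class and
# field names are illustrative, not from any particular library.
class Particle:
    def __init__(self, dim, bounds):
        lo, hi = bounds
        self.x = np.random.uniform(lo, hi, dim)  # current position X_i
        self.v = np.zeros(dim)                   # current velocity V_i
        self.pbest_x = self.x.copy()             # best position this particle has found
        self.pbest_val = float("inf")            # fitness at pbest (minimization)
```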

Algorithm description


PSO is initialized as a group of random particles (random solutions) and then searches for the optimum by iterating. In each iteration, each particle updates itself by tracking two "extremes": pbest and gbest.

  After finding these two best values, the particle updates its velocity and position with the following formulas:
  v_i = v_i + c1 × rand() × (pbest_i − x_i) + c2 × rand() × (gbest − x_i)    (1)

  x_i = x_i + v_i

where rand() is a random number uniformly distributed in (0, 1), and c1, c2 are learning factors.
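
As an illustration of formula (1), one velocity-and-position update for a single particle might look like this, reusing the Particle sketch above (a sketch for minimization; the default c1 = c2 = 2 follows the parameter discussion later in this article):

```python
import numpy as np

def update_particle(p, gbest_x, c1=2.0, c2=2.0):
    # One application of formula (1) for a single particle p (the Particle
    # sketch above), pulling it toward its own pbest and the swarm's gbest.
    r1 = np.random.rand(p.x.size)
    r2 = np.random.rand(p.x.size)
    p.v = p.v + c1 * r1 * (p.pbest_x - p.x) + c2 * r2 * (gbest_x - p.x)
    p.x = p.x + p.v
```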

Algorithm optimization


In 1998, Shi et al. published the paper "A modified particle swarm optimizer" at the IEEE International Conference on Evolutionary Computation, which revised the velocity update by introducing an inertia weight factor w:

  v_i = w × v_i + c1 × rand() × (pbest_i − x_i) + c2 × rand() × (gbest − x_i)    (2)

  x_i = x_i + v_i    (3)

The larger w is, the stronger the global search ability and the weaker the local search ability; the smaller w is, the reverse.

  Initially, Shi took w as a constant. Later experiments found that a dynamic w obtains better optimization results than a fixed value. The dynamic w can change linearly during the PSO search, or change according to some measure of PSO performance. Currently, the linearly decreasing weight (LDW) strategy suggested by Shi is the most widely adopted.
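
A minimal sketch of the LDW strategy; the start and end values 0.9 and 0.4 are commonly used in the literature and are assumptions here, not values given in the text:

```python
def linear_weight(k, k_max, w_start=0.9, w_end=0.4):
    # Linearly decreasing inertia weight (LDW): w falls from w_start at
    # iteration 0 down to w_end at iteration k_max.
    return w_start - (w_start - w_end) * k / k_max
```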

The flow of the standard PSO algorithm:


  Step 1: Initialize a group of particles (group size m) with random positions and velocities;

  Step 2: Evaluate the fitness of each particle;

  Step 3: For each particle, compare its current fitness with the fitness of the best position pbest it has visited; if the current value is better, take the current position as the new pbest;

  Step 4: For each particle, compare its current fitness with the fitness of the global best position gbest; if the current value is better, take the current position as the new gbest;

  Step 5: Adjust each particle's velocity and position according to formulas (2) and (3);

  Step 6: If the termination condition is not met, go to Step 2.

  The termination condition is generally chosen, according to the specific problem, as reaching a maximum number of iterations Gk and/or having the best position found by the swarm so far satisfy a predetermined minimum fitness threshold.
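
Putting the six steps together, a compact sketch of the standard flow (formulas (2) and (3), with a linearly decreasing weight) might look like this; the function name and parameter defaults are illustrative assumptions:

```python
import numpy as np

def pso(f, dim, bounds, m=30, g_max=200, c1=2.0, c2=2.0, v_max=None, tol=None):
    """Sketch of the standard PSO flow (minimizing f) with formulas (2) and (3)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (m, dim))                      # Step 1: random positions...
    v = np.random.uniform(-(hi - lo), hi - lo, (m, dim)) * 0.1   # ...and velocities
    pbest_x = x.copy()                                           # Steps 2-3: initial pbest
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest_val.argmin()                                       # Step 4: initial gbest
    gbest_x, gbest_val = pbest_x[g].copy(), pbest_val[g]
    for k in range(g_max):
        w = 0.9 - 0.5 * k / g_max                                # LDW: w from 0.9 toward 0.4
        r1 = np.random.rand(m, dim)
        r2 = np.random.rand(m, dim)
        v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest_x - x)  # formula (2)
        if v_max is not None:
            v = np.clip(v, -v_max, v_max)                        # Vmax limit (see below)
        x = x + v                                                # formula (3)
        val = np.apply_along_axis(f, 1, x)                       # Step 2: evaluate fitness
        better = val < pbest_val                                 # Step 3: update each pbest
        pbest_x[better] = x[better]
        pbest_val[better] = val[better]
        g = pbest_val.argmin()                                   # Step 4: update gbest
        if pbest_val[g] < gbest_val:
            gbest_x, gbest_val = pbest_x[g].copy(), pbest_val[g]
        if tol is not None and gbest_val <= tol:                 # Step 6: fitness threshold
            break
    return gbest_x, gbest_val

# Example: minimize the sphere function in 5 dimensions.
# best_x, best_val = pso(lambda z: float(np.sum(z**2)), dim=5, bounds=(-10, 10))
```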

Parameter analysis


If the social part (the gbest term) is removed from formula (2), a particle's velocity update depends only on its own experience:

  v_i = w × v_i + c1 × rand() × (pbest_i − x_i)

This is called the local PSO algorithm. Since there is no information exchange between individuals, the whole swarm amounts to many particles searching blindly at random, convergence is slow, and the chance of obtaining the optimal solution is small.

  The group size m is generally 20-40, and it can be 100-200 for difficult or specific problems.

  The maximum velocity Vmax determines the resolution (precision) with which the region between a particle's current position and the best position is searched. If Vmax is too large, particles may fly past good solutions; if it is too small, particles cannot explore sufficiently beyond local optima and may become trapped in a local extremum. The limit also prevents numerical overflow and sets the granularity of the search of the problem space.
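
In code, the Vmax limit is typically a component-wise clamp. The fraction 0.2 below is a common rule of thumb, assumed here rather than taken from the text:

```python
import numpy as np

# Hypothetical values for illustration: search range [lo, hi] per dimension.
lo, hi = -10.0, 10.0
v_max = 0.2 * (hi - lo)          # a common heuristic: Vmax as a fraction of the range
v = np.random.randn(30, 5)       # e.g. velocities of 30 particles in 5 dimensions
v = np.clip(v, -v_max, v_max)    # keep every component within [-Vmax, Vmax]
```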

  The weighting factors include the inertia weight w and the learning factors c1 and c2. The inertia weight keeps the particle moving with momentum, giving it a tendency to expand the search space and the ability to explore new regions. c1 and c2 represent the weights of the stochastic acceleration terms that pull each particle toward the pbest and gbest positions. Low values let particles wander around target regions before being pulled back, while high values make particles rush abruptly toward, or past, target regions.

3. Optimizing PSO

Introducing a constriction factor (no inertia weight needed)

Usually c1 = c2 = 2 is used. Suganthan's experiments show that better solutions can be obtained when c1 and c2 are constants, though not necessarily equal to 2. Clerc introduced a constriction factor K to guarantee convergence:

  v_i = K × [ v_i + c1 × rand() × (pbest_i − x_i) + c2 × rand() × (gbest − x_i) ]

  K = 2 / | 2 − φ − sqrt(φ² − 4φ) |,  where φ = c1 + c2 and φ > 4

φ is usually set to 4.1 (c1 = c2 = 2.05), which gives K ≈ 0.729. Experiments show that, compared with inertia-weight PSO, constriction-factor PSO converges faster. In fact, if c1 and c2 are chosen appropriately, the two algorithms are equivalent, so constriction-factor PSO can be regarded as a special case of inertia-weight PSO. Choosing the algorithm's parameter values appropriately can improve its performance.
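
The relation between φ and K is easy to check numerically; a small sketch (the function name is illustrative):

```python
import math

def constriction_factor(c1=2.05, c2=2.05):
    # Clerc's constriction factor: K = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|
    # with phi = c1 + c2, valid for phi > 4.
    phi = c1 + c2
    assert phi > 4, "constriction requires phi = c1 + c2 > 4"
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

print(constriction_factor())  # phi = 4.1 gives K ≈ 0.7298, the 0.729 quoted above
```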

Discrete binary particle swarm


Basic PSO operates in real-valued continuous space, but many practical problems are combinatorial optimization problems, so a discrete binary version of PSO was proposed. The velocity update keeps the same form as in continuous PSO, while the position update becomes probabilistic:

  S(v_ij) = 1 / (1 + exp(−v_ij))

  x_ij = 1 if rand() < S(v_ij), otherwise x_ij = 0

That is, the sigmoid of each velocity component gives the probability that the corresponding bit of the particle's position takes the value 1.
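
A sketch of one discrete binary PSO step under the formulas above (the clamp Vmax = 4.0 is a commonly used assumption, not a value from the text):

```python
import numpy as np

def binary_pso_step(x, v, pbest_x, gbest_x, c1=2.0, c2=2.0, v_max=4.0):
    # Velocity update has the same form as continuous PSO; the position
    # update samples each bit with probability S(v). Clamping v keeps
    # the probabilities away from exactly 0 and 1.
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v = v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest_x - x)
    v = np.clip(v, -v_max, v_max)
    s = 1.0 / (1.0 + np.exp(-v))                   # S(v_ij): probability that a bit is 1
    x = (np.random.rand(*x.shape) < s).astype(int)
    return x, v
```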

PSO and GA comparison


Common features: (1) Both are bio-inspired algorithms. (2) Both are global optimization methods. (3) Both are stochastic search algorithms. (4) Both are implicitly parallel. (5) Both search using individuals' fitness information, so they are not restricted by properties of the function such as continuity or differentiability. (6) On high-dimensional complex problems, both often suffer from premature convergence and poor convergence performance, and neither can guarantee convergence to the optimum.

  Differences: (1) PSO has memory: knowledge of good solutions is retained by all particles, whereas in GA previous knowledge is destroyed as the population changes. (2) Particles in PSO share information only through the current best point, so to a large extent it is a one-way information-sharing mechanism; in GA chromosomes share information with each other, moving the whole population toward the optimal region. (3) GA's encoding techniques and genetic operations are relatively simple. Compared with GA, PSO has no crossover or mutation operations: particles are updated only through their internal velocity, so the principle is simpler, there are fewer parameters, and it is easier to implement.

  GA can be used to study three aspects of neural networks (NN): connection weights, network structure, and learning algorithms. Its advantage is that it can handle problems traditional methods cannot, such as non-differentiable node transfer functions or the absence of gradient information. Its disadvantages: performance is not particularly good on some problems, and encoding the network weights and choosing the genetic operators can be troublesome. PSO has also been used for neural network training, and research shows it is a promising algorithm for this purpose: it is faster, obtains better results, and avoids the problems encountered by genetic algorithms.

4. PSO implementation


The problem addressed by each algorithm is as follows:

  PSO: the basic particle swarm algorithm for unconstrained optimization problems
  YSPSO: PSO with a constriction factor for unconstrained optimization problems
  LinWPSO: linearly decreasing weight PSO for unconstrained optimization problems
  SAPSO: adaptive weight PSO for unconstrained optimization problems
  RandWPSO: random weight PSO for unconstrained optimization problems
  LnCPSO: PSO with synchronously changing learning factors for unconstrained optimization problems
  AsyLnCPSO: PSO with asynchronously changing learning factors for unconstrained optimization problems
  SecPSO: second-order PSO for unconstrained optimization problems
  SecVibratPSO: second-order oscillating PSO for unconstrained optimization problems
  CLSPSO: chaotic PSO for unconstrained optimization problems
  SelPSO: selection-based PSO for unconstrained optimization problems
  BreedPSO: crossover (breeding) PSO for unconstrained optimization problems
  SimuAPSO: simulated-annealing-based PSO for unconstrained optimization problems

CSDN link (contains the basic PSO and the 12 optimized PSO algorithms above; fully usable).

Reference: courseware of Prof. Yao Xinzheng, Xidian University.


Origin: blog.51cto.com/15009309/2553993