Summary of Particle Swarm Optimization Strategies

Foreword

Based on the needs of my research topic, this post summarizes optimization strategies for the particle swarm optimization (PSO) algorithm. Judging from the papers reviewed, improvements to PSO fall mainly into the following five areas:

  • Optimization of the inertia weight w
  • Optimization of the learning factors c1 and c2
  • Population optimization
  • Optimization of the velocity update formula
  • Optimization of the displacement update formula

This post summarizes optimizations in these five areas. For each specific optimization, it covers five aspects: the optimization strategy, the reference paper, the principle, the purpose, and the concrete implementation.

The optimization strategies summarized here also apply to other swarm intelligence algorithms, and this summary will be updated continuously.

1 Optimization of the inertia weight w

1.1 Introducing the chaotic Sine map to construct a nonlinear random increasing inertia weight

Reference Paper – Research on Control Strategy of Improved Particle Swarm Algorithm MPPT Based on Chaos Map and Gaussian Perturbation

Note that this is a nonlinear random *increasing* inertia weight, which differs from the usual nonlinear decreasing inertia weight.

With a nonlinear decreasing inertia weight, w is large in the early stage, giving strong global search ability, and small in the later stage, giving strong local search ability. A nonlinear increasing inertia weight is the opposite: w is set small early, giving strong local search ability, and grows large later, giving strong global search ability. As a classic chaotic map, the Sine map has good ergodicity, which increases the randomness of the algorithm so that it has strong local optimization ability in the early stage and good global optimization ability in the later stage. Random numbers in (0, 1) generated by the chaotic Sine map make w grow more slowly, giving it stronger local exploitation ability early on. The improved inertia weight expression is as follows:

[Equation image: improved inertia-weight expression for wk]

k is the current number of iterations; kmax is the maximum number of iterations; wk is the weight value at the kth iteration; wmax and wmin are the upper and lower limits of the inertia weight, which are 0.9 and 0.4 respectively; S(k) is the chaotic Sine map.
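The idea can be sketched in Python. This is a minimal illustration, not the paper's exact formula: the quadratic base curve, the modulation by S(k), and the starting value s0 are all assumptions; only the Sine map itself and the bounds wmax = 0.9, wmin = 0.4 come from the text above.

```python
import math

def sine_map(s):
    # One step of the chaotic Sine map; maps values in (0, 1) back into (0, 1].
    return math.sin(math.pi * s)

def increasing_inertia_weight(k, k_max, w_min=0.4, w_max=0.9, s0=0.7):
    # Iterate the Sine map k times to obtain the chaotic value S(k), then
    # modulate a nonlinearly increasing base curve with it so that w grows
    # slowly at first (strong local exploitation early).
    s = s0
    for _ in range(k):
        s = sine_map(s)
    return w_min + (w_max - w_min) * (k / k_max) ** 2 * s
```

Because S(k) is chaotic, w fluctuates from iteration to iteration while still trending upward from wmin toward wmax.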

1.2 Using an exponential nonlinear decreasing inertia weight

Reference Paper – Improved Particle Swarm Optimization Algorithm Introducing Circle Mapping and Sine-Cosine Factors

The inertia weight decreases as the number of iterations increases, in a nonlinear form that falls fast at first and slowly later. This fits the algorithm's search characteristic of emphasizing global exploration in the early stage and local exploitation in the later stage, and helps improve convergence speed and accuracy.

[Equation image: exponential nonlinear decreasing inertia weight]

where t is the current iteration number and Tmax is the maximum number of iterations.
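A hedged sketch of an exponentially decreasing inertia weight of the "fast then slow" kind described above. The decay constant and the exact expression are illustrative assumptions, not the paper's formula:

```python
import math

def exp_decreasing_inertia_weight(t, t_max, w_min=0.4, w_max=0.9, decay=5.0):
    # Falls quickly at first, then slowly, approaching w_min as t -> t_max.
    return w_min + (w_max - w_min) * math.exp(-decay * t / t_max)
```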

1.3 Changing the inertia weight by strategy

Reference Paper – Multi-Strategy Co-Evolution Particle Swarm Algorithm Based on Cauchy Mutation

The core idea of this strategy is to divide the population into multiple subpopulations: the large-scale search subpopulation changes the inertia weight with strategy 1, while the fine-search subpopulation uses strategy 2. Particles in different subpopulations thus have different exploration and exploitation abilities during the same period.

[Equation image: the two inertia-weight change strategies]

2 Optimization of the learning factors c1 and c2

2.1 Introducing sine and cosine functions to construct nonlinear asynchronous learning factors

Reference Paper – LED Light Source Array Optimization Based on Improved Particle Swarm Optimization

When c1 is large and c2 is small, PSO has better global search ability; when c1 is small and c2 is large, it has better local search ability. To make c1 large and c2 small in the initial stage (enhancing global search) and c1 small and c2 large in the final stage (enhancing local search), sine and cosine functions are used to control c1 and c2 so that c1 decreases nonlinearly and c2 increases nonlinearly.

However, the PSO variant in this paper differs from others: it uses a nonlinearly increasing inertia weight, giving strong local search ability early and strong global search ability late. Accordingly, and unlike other PSO variants, c1 is increased nonlinearly and c2 is decreased nonlinearly to match that behavior. The improved formulas are as follows:

[Equation image: sine/cosine expressions for c1 and c2]

In the formulas, wmax and wmin are the maximum and minimum values of the inertia weight, and t and tmax are the current and maximum iteration numbers, respectively.
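A minimal sketch of sine/cosine asynchronous learning factors in the direction this paper uses (c1 increasing, c2 decreasing). The phase sweep and the bounds c_min, c_max are assumptions, not the paper's values:

```python
import math

def sin_cos_learning_factors(t, t_max, c_min=0.5, c_max=2.5):
    phase = math.pi * t / (2 * t_max)               # sweeps from 0 to pi/2
    c1 = c_min + (c_max - c_min) * math.sin(phase)  # nonlinear increase
    c2 = c_min + (c_max - c_min) * math.cos(phase)  # nonlinear decrease
    return c1, c2
```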

2.2 Introducing a logarithmic function to construct nonlinear asynchronous learning factors

Reference Paper – Research on Control Strategy of Improved Particle Swarm Algorithm MPPT Based on Chaos Map and Gaussian Perturbation

When c1 is large and c2 is small, PSO has better global search ability; when c1 is small and c2 is large, it has better local search ability. A logarithmic function is introduced to construct nonlinear asynchronous learning factors that balance the algorithm's global exploration and local search abilities: in the usual scheme, c1 is large and c2 small in the initial stage to enhance global search, and c1 small and c2 large in the final stage to enhance local search.

However, the algorithm in this paper is designed to have strong local exploitation ability early and strong global search ability late, so c1 and c2 must change quickly in the early iterations, with c1 taking a small value and c2 a large value in the early stage.

[Equation image: logarithmic expressions for c1 and c2]

c1_max, c1_min, c2_max, and c2_min are the upper and lower limits of the learning factors c1 and c2, set to 2.1, 0.8, 2.1, and 0.8 respectively.
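A hedged sketch of log-based asynchronous learning factors consistent with the description above: c1 starts small and rises quickly early on, c2 starts large and falls quickly, matching the "strong local search early, global later" design. Only the bounds 2.1 and 0.8 come from the text; the specific logarithmic expression is an assumption:

```python
import math

def log_learning_factors(t, t_max, c_min=0.8, c_max=2.1):
    g = math.log(1 + (math.e - 1) * t / t_max)  # rises from 0 to 1, fast early
    c1 = c_min + (c_max - c_min) * g            # small early, grows quickly
    c2 = c_max - (c_max - c_min) * g            # large early, falls quickly
    return c1, c2
```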

2.3 Replacing the PSO learning factor with the sine and cosine terms of SCA and introducing a probability p

Reference Paper – Improved Particle Swarm Optimization Algorithm Introducing Circle Mapping and Sine-Cosine Factors

Inspired by the search mechanism and position update formula of the sine cosine algorithm (SCA), the learning factor in PSO is replaced with SCA's sine and cosine terms, and a probability p is introduced. p is a random number in [0, 1]: when p < 0.5, formula (15) updates the particle velocity; otherwise formula (16) is used. Here ω is given by formula (14) — the exponential decreasing inertia weight of Section 1.2 — r1 is given by formula (17), and r2 is a random number in [0, 2π].

[Equation images: formulas (14)-(17), sine-cosine velocity updates]

After introducing the sine and cosine factors, the learning factor no longer follows a simple monotonically decreasing or increasing trend but oscillates with an overall decaying trend within [−2, 2]. Combined with p, probabilistically switching between the sine and cosine terms lets each particle search and move around both its personal best position and the swarm best position, which increases the diversity of exploration directions and broadens the particles' exploration space. Together with the exponential nonlinear decreasing inertia weight, this achieves a better balance between global exploration and local exploitation.
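A hedged control-flow sketch of such a velocity update. The linear decay of r1 stands in for the paper's formula (17), which is not reproduced here; the way the sine/cosine term multiplies the cognitive and social differences is likewise an assumed form:

```python
import math
import random

def sca_velocity_update(v, x, pbest, gbest, w, t, t_max, a=2.0, rng=random):
    r1 = a - a * t / t_max                # decays from a to 0 (assumed schedule)
    r2 = rng.uniform(0.0, 2.0 * math.pi)  # random angle in [0, 2*pi]
    # probability p switches between the sine branch and the cosine branch
    factor = math.sin(r2) if rng.random() < 0.5 else math.cos(r2)
    return w * v + r1 * factor * (pbest - x) + r1 * factor * (gbest - x)
```

Since r1·factor oscillates in [−a, a], the effective learning factor decays while changing sign, which is the oscillating-attenuation behavior described above.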

3 Population Optimization

3.1 Initializing the population with the Circle map

Reference Paper – Improved Particle Swarm Optimization Algorithm Introducing Circle Mapping and Sine-Cosine Factors

The initial population is usually generated by random initialization, but the uneven distribution of individuals obtained this way fundamentally limits the algorithm's convergence performance. Chaotic maps, with their randomness, ergodicity, and regularity, meet the requirements of population initialization. After comparison and analysis, this paper initializes the population with the Circle map to obtain a more uniform and diverse initial population, which helps improve the algorithm's convergence speed and accuracy. The chaotic sequence generated by the Circle map is given by the following formula, where numi denotes the i-th chaotic sequence value and mod(a, b) is the remainder of a divided by b.

[Equation image: Circle map chaotic sequence]
Figure 1 shows the distributions of 1000 sequence values generated by ordinary random numbers, the Logistic map, the Tent map, and the Circle map. Observing Figure 1, the chaotic sequence values generated by the Circle map are distributed more uniformly over (0, 1) than those from the other three sources, providing a high-quality search space for the algorithm and helping improve convergence accuracy.
[Figure 1: distributions of 1000 values from random numbers and the Logistic, Tent, and Circle maps]
The Circle-map initialization covers both velocity and position, as shown in formulas (12) and (13): vub and vlb are the upper and lower bounds of particle velocity, xub and xlb are the upper and lower bounds of particle position, and numi,j and num′i,j are the chaotic sequence values generated by the Circle map for the corresponding particle dimension.

[Equation images: formulas (12) and (13), velocity and position initialization]
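A sketch of Circle-map initialization. The map itself uses the commonly cited form x_{i+1} = mod(x_i + 0.2 − (0.5 / 2π)·sin(2π·x_i), 1); the seed x0 and the exact scaling onto the search box are assumptions standing in for formulas (12) and (13):

```python
import math

def circle_map_sequence(n, x0=0.3, a=0.5, b=0.2):
    # Circle map: x_{i+1} = mod(x_i + b - (a / (2*pi)) * sin(2*pi*x_i), 1)
    seq, x = [], x0
    for _ in range(n):
        x = (x + b - (a / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0
        seq.append(x)
    return seq

def circle_init_positions(pop_size, dim, x_lb, x_ub):
    # Map the chaotic values onto [x_lb, x_ub] per dimension; velocities
    # would be initialized the same way on [v_lb, v_ub].
    nums = circle_map_sequence(pop_size * dim)
    return [[x_lb + (x_ub - x_lb) * nums[i * dim + j] for j in range(dim)]
            for i in range(pop_size)]
```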

3.2 Elite Reverse Learning Strategy

Reference Paper 1 – A Mixed Strategy Improved Whale Optimization Algorithm
Reference Paper 2 – A Multi-Objective Evolutionary Algorithm Using Archives Elite Learning and Reverse Learning
Reference Paper 3 – A Particle Swarm Optimization Algorithm for Elite Reverse Learning

Because the quality of the initial population directly affects the subsequent iterations — a high-quality population improves convergence speed and accuracy — and random initialization cannot guarantee that quality, the solution space is optimized with an elite strategy combined with a reverse (opposition-based) learning strategy. This improves the quality of the initial population and avoids the slow convergence and premature convergence caused by a poor one.

The elite strategy selects individuals with high fitness into a new population for iteration, accelerating convergence and improving accuracy by screening out individuals with poor fitness. The reverse strategy generates a reverse (opposition) solution for each individual, compares it with the original, and keeps the better one for the next iteration. Experiments show that the reverse solutions of most elite particles are closer to the optimal solution than those of ordinary particles, so introducing elite reverse solutions broadens the swarm's activity region, improves diversity, and helps the algorithm avoid local optima. The core procedure is as follows:

a) Initialize the whale population W with a random strategy, compute and sort fitness values in descending order, and select the top n/2 individuals to form the elite population P;
b) Compute the reverse solution of each individual in the elite population P to form the elite reverse population O;
c) Merge W and O, keep the top n individuals by fitness, and establish the new population N.

The relevant definitions and formulas are as follows:
[Equation images: elite reverse learning definitions and formulas]
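Steps a)-c) can be sketched as below. Two assumptions, since the paper's formula images are omitted: the reverse solution is the standard opposition-based form x′ = lb + ub − x, and fitness is treated as lower-is-better (the paper's whale algorithm sorts in the opposite direction):

```python
def elite_reverse_population(pop, fitness, lb, ub):
    """One elite reverse learning pass; fitness is lower-is-better here."""
    n = len(pop)
    elite = sorted(pop, key=fitness)[: n // 2]               # step a)
    reverse = [[lb + ub - v for v in ind] for ind in elite]  # step b)
    merged = pop + reverse                                   # step c)
    return sorted(merged, key=fitness)[:n]
```

Because the new population is the best n of the merged set, its best individual can never be worse than the original population's best.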

3.3 Random search strategy with elimination

Reference Paper – Improved Particle Swarm Optimization Algorithm Introducing Circle Mapping and Sine-Cosine Factors

First, an adaptive random parameter A, determined by formula (6), is set in the algorithm. Each time particle positions are updated, |A| is checked. When |A| > 1, 20% of the population is randomly selected to perform the random search with elimination: each selected particle's position is updated by formula (9) and the corresponding fitness value is computed. If any such particle has better (lower) fitness than the current swarm best, the swarm-best position is updated to that particle's position. These randomly searched positions then replace the positions of the worst individuals in the current population, and the recombined population is taken as the population for the next iteration.

The random search strategy with elimination updates the swarm-best position through conditional random search and comparison, effectively increasing the algorithm's global exploration ability. The elimination step replaces some poor individuals in the population, which improves population quality, increases individual diversity, and enhances the algorithm's ability to escape local optima, better ensuring convergence to the optimal solution.
[Equation images: formulas (6) and (9)]
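A hedged control-flow sketch of the strategy. Formulas (6) and (9) are not reproduced; here A is approximated in the WOA-style form A = 2a·r − a with a decaying from 2 to 0, and the selected particles are simply re-placed uniformly at random, both of which are stand-in assumptions:

```python
import random

def random_search_with_elimination(pop, fitness, lb, ub, t, t_max, rng=random):
    a = 2.0 - 2.0 * t / t_max            # decays from 2 to 0 (assumption)
    A = 2.0 * a * rng.random() - a       # stand-in for the paper's formula (6)
    if abs(A) <= 1.0:
        return pop                        # no random search this iteration
    n, dim = len(pop), len(pop[0])
    k = max(1, n // 5)                    # 20% of the population
    scouts = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(k)]
    # elimination: the k worst individuals compete with the scouts
    ranked = sorted(pop, key=fitness)     # lower fitness is better here
    keep = ranked[: n - k]
    pool = sorted(ranked[n - k:] + scouts, key=fitness)[:k]
    return keep + pool
```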

3.4 Multi-population strategy

Reference Paper – Multi-Strategy Co-Evolution Particle Swarm Algorithm Based on Cauchy Mutation

In this paper, the particle swarm is divided into two subpopulations: a large-scale search population and a fine-search population. Each adopts a different w-change strategy, so that subpopulations in the same period have different exploration and exploitation abilities.

[Equation image: the two subpopulations' w-change strategies]
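An illustrative split of the two w strategies. The paper's concrete strategy 1 and strategy 2 are not reproduced; these two linear curves are pure assumptions, chosen only so that the large-scale search subpopulation always carries a larger inertia weight than the fine-search subpopulation:

```python
def subpopulation_weights(t, t_max, w_min=0.4, w_max=0.9):
    frac = t / t_max
    w_explore = w_max - 0.5 * (w_max - w_min) * frac        # stays relatively large
    w_exploit = w_min + 0.5 * (w_max - w_min) * (1 - frac)  # stays relatively small
    return w_explore, w_exploit
```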

4 Optimization of the velocity update formula

4.1 Adaptive velocity update strategy

Reference Paper – Multi-Strategy Co-Evolution Particle Swarm Algorithm Based on Cauchy Mutation

In standard PSO, even a particle that found a better solution than the previous generation will still change its search direction in the next iteration according to the standard velocity update formula. Updating all particles uniformly, without distinction, reduces the algorithm's convergence speed. An adaptive velocity update strategy is therefore proposed that treats particles that just found better solutions differently from ordinary particles, improving convergence speed. The formula is as follows:

[Equation image: adaptive velocity update formula]
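A plain-reading sketch of the adaptive idea, not the paper's exact formula: a particle that just improved its personal best keeps flying along its current direction (inertia term only), while ordinary particles use the standard PSO update:

```python
def adaptive_velocity(v, x, pbest, gbest, w, c1, c2, r1, r2, improved):
    if improved:
        # the particle just improved its personal best:
        # keep the promising direction, skip the pull terms
        return w * v
    # ordinary particle: standard PSO velocity update
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```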

5 Optimization of the displacement update formula

5.1 Adding an adaptive parameter to the displacement update formula

Reference Paper – Nonlinear Inertial Weighted Particle Swarm Algorithm with Filtering Mechanism

To ensure fast particle movement in the early stage of optimization and to prevent particles from diverging in the later stage, the standard PSO displacement formula is adjusted by adding an adaptive parameter that meets both requirements.

[Equation images: displacement update formula with adaptive parameter]
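A minimal sketch of the idea: the standard update x = x + v gains an adaptive step factor λ(t) that is large early (fast movement) and small late (prevents divergence). The linear schedule and the bounds λmax, λmin are assumptions, not the paper's formula:

```python
def adaptive_position_update(x, v, t, t_max, lam_max=1.0, lam_min=0.1):
    lam = lam_max - (lam_max - lam_min) * t / t_max  # large early, small late
    return x + lam * v
```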


Origin blog.csdn.net/weixin_44049823/article/details/129432256