Excellent performance! | Gray Wolf Optimization Algorithm Integrating Multiple Strategies

The gray wolf optimization (GWO) algorithm is a swarm intelligence (SI) algorithm proposed by Mirjalili et al. in 2014. GWO performs optimization by simulating the predatory behavior of gray wolf packs and the cooperative mechanism within the pack, which balances exploration and exploitation. It performs well in terms of convergence speed and solution accuracy, and has been widely applied in engineering fields such as neural networks, scheduling, control, and power systems.

However, GWO is prone to premature convergence and to stagnating at local optima. To address this problem, several categories of improvements can be considered:

(1) Adjust control parameters

(2) Introduce new search strategies

(3) Hybridize with other optimization algorithms

(4) Modify the wolf pack structure

Based on the above ideas, this article designs a gray wolf optimization algorithm that integrates multiple strategies. Function test results show that the performance of the improved algorithm is significantly better.

00 Article Directory

1 Principle of Gray Wolf Optimization Algorithm

2 Improved gray wolf optimization algorithm

3 Code directory

4 Algorithm performance

5 Source code acquisition

6 Summary

01 Principle of Gray Wolf Optimization Algorithm

The principle of the gray wolf optimization algorithm, and how to obtain its MATLAB code, were introduced in previous articles and will not be repeated here.

02 Improved gray wolf optimization algorithm

2.1 Improved Tent chaos initialization

Population initialization affects the search performance of a swarm algorithm. Since no prior information is available, individuals are usually generated by random initialization. This strategy works to some extent, but the individuals may be distributed non-uniformly over the search domain and end up far from the global optimum, which lowers the convergence speed.

Chaos is a common phenomenon in nonlinear systems and has the characteristics of ergodicity, randomness and regularity. Searching with chaotic variables therefore has clear advantages over disordered random search. Chaotic maps commonly used in the literature include the Logistic map and the Tent map. According to the literature [3], the Logistic map distributes values fairly uniformly in the middle of the range but with a particularly high probability at the two ends, which is detrimental to finding the optimum when the global optimum does not lie near the boundaries of the design space. The Tent chaotic map has a simple structure and offers better traversal uniformity and faster search speed than the Logistic map. However, the Tent iteration sequence contains small periods and unstable periodic points. To prevent the sequence from falling into these small periodic and unstable periodic points, a random term rand(0, 1) × 1/N [1] is introduced into the original Tent chaotic map. The improved Tent chaotic map is then expressed as follows:

[Equation: improved Tent chaotic map]

where N is the number of individuals in the sequence. The introduced random term rand(0, 1)/N not only preserves the randomness, ergodicity and regularity of the Tent chaotic map, but also effectively prevents the iteration from falling into small periodic and unstable periodic points, while keeping the random values within a controlled range.
[Figure: initial distributions of the chaotic sequences generated by the Logistic, Tent and improved Tent maps]

The figure shows the initial distributions, in a two-dimensional region, of the chaotic sequences generated by the Logistic, Tent and improved Tent chaotic maps. The improved Tent map is distributed more uniformly, so this article uses it to replace the random initialization of the gray wolf optimizer, improving the distribution quality of the initial population in the search space, strengthening global search, and thereby improving the solution accuracy of the algorithm.
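A minimal MATLAB sketch of this initialization is given below. It assumes the commonly used piecewise form of the improved Tent map from [1] (the exact expression is the equation above); the function name tent_init and the wrap-around via mod are illustrative choices.

```matlab
% Minimal sketch of improved Tent chaotic initialization (assumed form, see [1]).
% N: population size, dim: problem dimension, lb/ub: lower/upper bounds (scalars or 1-by-dim)
function X = tent_init(N, dim, lb, ub)
    Z = zeros(N, dim);
    Z(1, :) = rand(1, dim);                 % random starting point of the chaotic sequence
    for i = 1:N-1
        for j = 1:dim
            z = Z(i, j);
            if z <= 0.5
                z_next = 2*z + rand/N;      % perturbation rand(0,1)/N avoids short periods
            else
                z_next = 2*(1 - z) + rand/N;
            end
            Z(i+1, j) = mod(z_next, 1);     % keep the sequence inside [0, 1]
        end
    end
    X = lb + Z .* (ub - lb);                % map chaotic values onto the search space
end
```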

2.2 Adaptive hunting weight coefficient

The main difference between GWO and other SI algorithms is its social leadership hierarchy, which is of considerable value in improving GWO's search ability. The higher a gray wolf's rank, the better its knowledge of the prey and the stronger its leadership, and this relationship plays a vital role in group hunting during the search. However, in the original GWO hunting step (see the formula below), the weight coefficients of the three leading wolves are identical, which contradicts the hierarchy of a real wolf pack.

[Equation: original GWO position update with equal weights for the α, β and δ wolves]
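For reference, the standard GWO hunting step can be sketched in MATLAB as follows; the wolf positions X, X_alpha, X_beta, X_delta are row vectors and a is the control parameter, and every leader receives the same weight of 1/3.

```matlab
% Standard GWO hunting step for a single wolf (equal 1/3 weights for the leaders).
function X_new = gwo_hunt(X, X_alpha, X_beta, X_delta, a)
    dim = numel(X);
    A1 = 2*a*rand(1,dim) - a;  C1 = 2*rand(1,dim);
    A2 = 2*a*rand(1,dim) - a;  C2 = 2*rand(1,dim);
    A3 = 2*a*rand(1,dim) - a;  C3 = 2*rand(1,dim);
    X1 = X_alpha - A1 .* abs(C1 .* X_alpha - X);   % candidate guided by the alpha wolf
    X2 = X_beta  - A2 .* abs(C2 .* X_beta  - X);   % candidate guided by the beta wolf
    X3 = X_delta - A3 .* abs(C3 .* X_delta - X);   % candidate guided by the delta wolf
    X_new = (X1 + X2 + X3) / 3;                    % every leader has the same weight
end
```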

Inspired by the mass-update formula of the gravitational search algorithm, the following formula is introduced to measure the importance of the three leading wolves:

[Equation: adaptive hunting weight coefficients of the α, β and δ wolves]

where θi is the weight of the corresponding wolf. The closer the α wolf is to the prey (the optimum), the larger its weight. The α wolf provides the main direction of movement for the pack, while the β and δ wolves provide auxiliary directions that speed up encircling and attacking the prey.
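A hedged MATLAB sketch of such fitness-based weights is shown below. It borrows the GSA-style normalization and assumes minimization, so the exact weighting used here (the equation above) may differ; f_alpha, f_beta, f_delta are the leaders' fitness values and X1, X2, X3 are the leader-guided candidates from the hunting step sketched earlier.

```matlab
% Hedged sketch of fitness-based leader weights, inspired by the GSA mass update
% (minimization assumed; the paper's exact formula is the one in the equation above).
f = [f_alpha, f_beta, f_delta];                  % fitness values of the three leaders
worst = max(f);  best = min(f);
m = (f - worst) ./ (best - worst + eps) + eps;   % better (smaller) fitness -> larger "mass"
theta = m ./ sum(m);                             % normalized weights theta_alpha, theta_beta, theta_delta
X_new = theta(1)*X1 + theta(2)*X2 + theta(3)*X3; % weighted hunting step
```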

2.3 Improved control parameter a

The parameter a in GWO controls the exploration and exploitation process, mainly through its effect on the value of A. When |A| > 1, the wolf pack expands the encirclement to look for better prey, which corresponds to global search (exploration); when |A| < 1, the pack shrinks the encirclement to complete the final attack on the prey, which corresponds to local refined search (exploitation).

[Equations: coefficient A and the linear decay of a in the standard GWO]

The change of parameter a controls the transition from exploration to exploitation. The hunting process of gray wolves in nature is complex, so a simple linear decay cannot effectively characterize the search process. This article therefore uses a sinusoidal variation of a to replace the linear one, expressed as follows:
[Equation: sinusoidal decay of the control parameter a]

where t is the current iteration number and max_iter is the maximum number of iterations.
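For illustration only, one possible sinusoidal decay is sketched below next to the standard linear one; the exact expression used in this article is the equation above, and the cosine form here is an assumption.

```matlab
% Hedged sketch: one possible sinusoidal decay of a from 2 to 0 (illustrative only).
a_linear = 2 * (1 - t/max_iter);          % standard GWO: linear decay from 2 to 0
a_sin    = 2 * cos((pi/2) * t/max_iter);  % stays larger early (exploration), drops faster late
```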

The figure below compares this nonlinear function with the linear function of the standard GWO. In early iterations the nonlinear a stays larger, so the range available for exploration is wider; in later iterations a is smaller, which helps local exploitation and accelerates convergence.

[Figure: linear versus sinusoidal decay of a over the iterations]

2.4 Improved alpha wolf position update method

The α, β and δ wolves represent the evolutionary direction of the population in GWO and play a vital role in guiding the search. In traditional GWO, however, all gray wolves update their positions by the same mechanism, ignoring the special status of the α wolf. In reality an individual gray wolf only accepts guidance from wolves of higher rank, so it makes little sense for the α, β and δ wolves to be led by wolves ranked below them. In addition, the search ability of GWO is limited: once all wolves are drawn together toward the α wolf, population diversity deteriorates quickly and GWO converges prematurely. To solve this problem, separate update strategies are used for the α, β and δ wolves:

(1) δ wolf

The δ wolf accepts the leadership of the α and β wolves and is updated as follows:

[Equations: δ wolf position update guided by the α and β wolves]

where ρ is a random number distributed in [0, 1]. The random number lets GWO express more randomness throughout the optimization process, which helps global exploration.
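Purely as an illustration of such a ρ-weighted update (the actual formula is the pair of equations above), a minimal MATLAB sketch might combine the α- and β-guided candidates X1 and X2 as a random convex combination:

```matlab
% Hedged sketch of the delta-wolf update: a random convex combination of the
% alpha- and beta-guided candidates (illustrative only; see the equations above).
rho = rand;                                   % random number in [0, 1]
X_delta_new = rho .* X1 + (1 - rho) .* X2;    % leadership comes only from the alpha and beta wolves
```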

(2) β wolf

The β wolf accepts the leadership of the α wolf. Referring to the spiral update mechanism of the whale optimization algorithm, the β wolf approaches the α wolf along a spiral path; the update is as follows:
[Equation: spiral update of the β wolf toward the α wolf]

where ρ is a random number distributed in [0, 1]; the introduced randomness likewise enhances the exploration ability of the β wolf.
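The borrowed spiral move follows the usual whale optimization algorithm (WOA) form; a MATLAB sketch is below, where the constant b and the random number l follow the standard WOA convention (the exact coupling with ρ in the equation above may differ).

```matlab
% Spiral approach toward the alpha wolf, borrowed from WOA (standard form).
b = 1;                           % shape constant of the logarithmic spiral
l = 2*rand - 1;                  % random number in [-1, 1]
D = abs(X_alpha - X_beta);       % distance from the beta wolf to the alpha wolf
X_beta_new = D .* exp(b*l) .* cos(2*pi*l) + X_alpha;   % spiral move toward the alpha wolf
```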

(3) α wolf

The α wolf has the highest rank in the pack and should not be guided by other wolves, so a random walk strategy is introduced to update it. The Lévy flight mechanism combines short-distance exploration with occasional long-distance jumps: the short steps ensure that the α wolf searches around its current position, increasing the speed and accuracy of optimization, while the occasional long jumps expand the α wolf's search area and make the search more extensive.
[Figure: comparison of the Lévy, Gaussian and Cauchy distributions]

As the figure shows, the Lévy distribution combines the small-step perturbations of the Gaussian distribution with the large-step perturbations of the Cauchy distribution, so it can improve both the exploration and exploitation ability of the α wolf. At the same time, considering the randomness of the Lévy mechanism, a "greedy" survival-of-the-fittest selection is adopted. The Lévy flight mechanism is introduced as:
[Equation: α wolf random walk based on Lévy flight]

where Levy(λ) is a random search path, ⊕ denotes element-wise multiplication, and α is the step-size control factor, usually taken as 0.01.

Since the Lévy distribution is complex and hard to implement directly, the Mantegna algorithm is commonly used to simulate its flight trajectory; its mathematical expression is as follows:

[Equation: Mantegna step s = μ / |ν|^(1/β)]

where the parameter β of Levy(λ) ~ t^(−λ) satisfies λ = 1 + β with 0 < β ≤ 2, and μ and ν both obey normal distributions, defined as follows:

[Equation: μ ~ N(0, σ_μ²), ν ~ N(0, σ_ν²)]

where the variances σ_μ and σ_ν are determined by:
[Equation: expressions for σ_μ and σ_ν]

In the formula, Γ is the gamma function, and β is generally taken as 1.5.
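Mantegna's construction is standard and can be sketched in MATLAB as follows; dim is the problem dimension, and the last line combining the Lévy step with the α wolf position via the 0.01 step factor mentioned above is only an illustrative choice.

```matlab
% Mantegna's algorithm for generating Levy-flight steps (standard form, beta = 1.5).
beta = 1.5;
sigma_u = (gamma(1+beta) * sin(pi*beta/2) / ...
           (gamma((1+beta)/2) * beta * 2^((beta-1)/2)))^(1/beta);
u = randn(1, dim) * sigma_u;        % u ~ N(0, sigma_u^2)
v = randn(1, dim);                  % v ~ N(0, 1)
step = u ./ abs(v).^(1/beta);       % Levy-distributed step vector
X_levy = X_alpha + 0.01 * step;     % hedged: random walk of the alpha wolf with step factor 0.01
```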

The position update formula of the α wolf is as follows:
[Equation: α wolf position update with greedy (survival-of-the-fittest) selection]

where rand4 is a random number in [0, 1], p is the selection probability for survival of the fittest, and f(·) is the fitness value of an individual. The formula shows that this strategy drives the population to evolve toward the optimum and effectively improves the search efficiency of the algorithm.
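A simplified MATLAB sketch of the greedy acceptance is given below. It keeps the Lévy move only when it improves fitness and omits the probabilistic part involving p and rand4, so it is only an approximation of the formula above; fobj is a placeholder objective function and minimization is assumed.

```matlab
% Hedged sketch of the greedy (survival-of-the-fittest) selection: the Levy move
% is kept only if it improves fitness; otherwise the alpha wolf stays put.
if fobj(X_levy) < fobj(X_alpha)     % minimization assumed; fobj is a placeholder fitness function
    X_alpha = X_levy;               % keep the better position
end
```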

03 Code directory

[Figure: file list of the code package]

Main_MSGWO.m is the main program, with detailed code comments; running Main_MSGWO once produces all the results.

The results include: a comparison chart of the chaotic sequences, a comparison chart of the control parameter, a comparison chart of the Lévy, Gaussian and Cauchy distributions, and the iteration curves of the algorithms on each test function. Finally, an Excel file is generated containing, for each function, the mean value over n runs, the running time and the best solution. A plain-text (txt) copy of the main code is also provided to avoid garbled-encoding problems.

Part of the code:
[Figures: code excerpts]

Generated Excel file (1–5 correspond to MSGWO, GWO, WOA, PSO and GA, respectively):

04 Algorithm performance

CEC benchmark functions are used to preliminarily test the optimization performance. The running results are as follows:
[Figure: iteration curves on the CEC benchmark functions]

The results show that, except on a few functions, the improved gray wolf optimization algorithm achieves better convergence speed and accuracy on most functions, so the improvements are effective.

05 Source code acquisition

You can send a private message to the author or follow the author’s public account: KAU’s Cloud Experiment Bench

06 Summary

The improved gray wolf optimization algorithm proposed in this article performs well, and the improvements are effective. The improvement strategies can also be generalized, for example by introducing the leading-wolf mechanism of the gray wolf algorithm into other algorithms. The algorithm itself can be improved further, for instance by introducing the food-source idea of the artificial bee colony: when evolution stagnates, a perturbation can be applied, which would presumably improve performance further.

References

[1] Zhang Na, Zhao Zedan, Bao Xiaoan, et al. Improved Tent chaotic gravity search algorithm [J]. Control and Decision, 2020, 35(4): 893-900.

[2] Long W, Jiao J, Liang X, Tang M. Inspired grey wolf optimizer for solving large-scale function optimization problems [J]. Applied Mathematical Modelling, 2018, 60: 112–126.

[3] Miao Z, et al. Grey wolf optimizer with an enhanced hierarchy and its application to the wireless sensor network coverage optimization problem [J]. Applied Soft Computing, 2020, 96.

Another note: if anyone has optimization problems to be solved (in any field), you can send them to me, and I will selectively publish articles that use optimization algorithms to solve such problems.

If this article is helpful or inspiring to you, you can click Like/Reading (ง•̀_•́)ง in the lower right corner (you don’t have to click)

Source: blog.csdn.net/sfejojno/article/details/133869454