Adaptive sparrow search algorithm with a hybrid gray wolf hierarchy (GWHASSA)

The Sparrow Search Algorithm (SSA) is a swarm intelligence optimization algorithm proposed by Xue Jiankai et al. in 2020 [1], inspired by the foraging and predator-avoidance behavior of sparrows. It features strong local search ability and few adjustment parameters, and has been successfully applied to practical problems such as inspection of CT images, optimal identification of battery stack parameters, and parameter tuning of machine learning algorithms.

However, when facing complex optimization problems, the sparrow search algorithm still suffers from weak global search ability, a tendency to fall into local optima, and dependence on the initial solution. To overcome these shortcomings, Kaka (I) proposes in this article an adaptive sparrow search algorithm with a hybrid gray wolf hierarchy (GWHASSA).

00 Catalog

1 Principle of Sparrow search algorithm

2 Improved Sparrow Search Algorithm

3 Code directory

4 Algorithm performance

5 Source code acquisition

01 Principle of Sparrow Search Algorithm

The principle of the sparrow search algorithm and its MATLAB code have been described in Kaka's previous articles, so they are not repeated here; interested readers can refer to those earlier articles.

02 Improved Sparrow Search Algorithm

2.1 Chaos initialization strategy

The initialization of the population affects the algorithm's search performance. Since no prior information is available, the sparrows in SSA are usually generated by random initialization. This strategy works to some extent, but the resulting distribution of sparrows in the search domain is not always uniform, which may keep the sparrows far from the global optimum and slow down convergence. Chaos, a common phenomenon in nonlinear systems, has the characteristics of ergodicity, randomness and regularity.

Chaotic perturbation equations commonly used in the literature include the Logistic map and the Tent map. As shown in literature [2], the distribution of the Logistic map is relatively uniform for middle values but heavily concentrated at both ends; when the global optimum is not near the ends of the design variable space, this hinders the search. The Tent chaotic map has a simple structure and offers better traversal uniformity and faster search speed than the Logistic map, but its iteration sequence contains small periods and unstable periodic points. To prevent the Tent sequence from falling into these small periodic and unstable periodic points during iteration, a random variable rand(0, 1) × (1/N) is introduced into the original Tent map expression [3]. The improved Tent chaotic map is then expressed as:

z(i+1) = 2·z(i) + rand(0, 1)·(1/N),       0 ≤ z(i) ≤ 1/2
z(i+1) = 2·(1 - z(i)) + rand(0, 1)·(1/N), 1/2 < z(i) ≤ 1

Here N is the number of particles in the sequence. The introduced random variable rand(0, 1)/N preserves the randomness, ergodicity and regularity of the Tent chaotic map while effectively preventing iterations from falling into small periodic and unstable periodic points; because the random values are confined to a small range, the regularity of the chaotic sequence is maintained.

[Figure: initial distributions of the Logistic, Tent, and improved Tent chaotic sequences in a two-dimensional area]

The figure shows the initial distributions of chaotic sequences in a two-dimensional area generated by the Logistic, Tent and improved Tent chaotic maps. The improved Tent map is distributed most uniformly, so this article replaces the random initialization of the sparrow search algorithm with improved Tent chaos to improve the distribution quality of the initial population in the search space, strengthen global search capability, and thereby improve the algorithm's solution accuracy.
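As a sketch of this initialization step (the article's own implementation is in MATLAB; the Python below, including the names `improved_tent_sequence` and `chaos_init`, is an illustrative rendering of the improved Tent map described above, with the sequence wrapped back into (0, 1)):

```python
import numpy as np

def improved_tent_sequence(n_particles, dim, seed=None):
    # Improved Tent map: classic Tent iteration plus a rand(0,1)/N term,
    # which keeps the sequence away from small/unstable periodic points.
    rng = np.random.default_rng(seed)
    z = np.empty((n_particles, dim))
    z[0] = rng.random(dim)                    # random starting point in (0, 1)
    for i in range(n_particles - 1):
        zi = z[i]
        nxt = np.where(zi <= 0.5, 2.0 * zi, 2.0 * (1.0 - zi))
        nxt += rng.random(dim) / n_particles  # the rand(0,1)/N perturbation
        z[i + 1] = np.mod(nxt, 1.0)           # keep values inside (0, 1)
    return z

def chaos_init(n_particles, dim, lb, ub, seed=None):
    # Map the chaotic sequence onto the search bounds [lb, ub].
    z = improved_tent_sequence(n_particles, dim, seed)
    return lb + z * (ub - lb)
```

For example, `chaos_init(30, 2, -5.0, 5.0)` produces 30 two-dimensional sparrows spread over [-5, 5]^2, replacing the plain uniform sampling of standard SSA.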

2.2 Adaptive change of proportion

In the SSA algorithm, the ratio of discoverers to joiners is fixed. As a result, in early iterations the number of discoverers is relatively small, so the whole space cannot be fully explored; in late iterations the number of discoverers is relatively large, yet at that stage fewer global searchers are needed and more joiners are required for precise local search. To solve this problem, a discoverer-joiner adaptive adjustment strategy is proposed: early in the iteration, discoverers account for the majority of the population; as iterations proceed, the number of discoverers adaptively decreases while the number of joiners adaptively increases, gradually shifting from global search to precise local search and improving the overall convergence accuracy of the algorithm. The adjustment formulas for the numbers of discoverers and joiners are:
[Formula images: adaptive adjustment of the discoverer number pNum and joiner number sNum]

In the formulas, pNum is the number of discoverers; sNum is the number of joiners; b is a proportional coefficient that controls the split between discoverers and joiners; k is a perturbation deviation factor that perturbs the nonlinearly decreasing value r. As shown in the figure, the proposed discoverer/joiner ratio gradually converges as the iteration proceeds, striking a balance between early global search and late local refinement.
[Figure: variation of the discoverer/joiner ratio over the iterations]
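Since the exact adjustment formulas appear above only as images, the Python sketch below is an illustrative rendering: the exponential form of the decay r, the default values of b and k, the clipping bounds, and the function name `producer_joiner_counts` are all assumptions, but the behavior matches the description (discoverers dominate early, joiners dominate late):

```python
import numpy as np

def producer_joiner_counts(t, T_max, pop, b=0.7, k=0.1, rng=None):
    # Illustrative adaptive split between discoverers (pNum) and joiners (sNum).
    rng = rng or np.random.default_rng()
    r = b * np.exp(-4.0 * (t / T_max) ** 2)          # nonlinear decrease
    r = r * (1.0 + k * (2.0 * rng.random() - 1.0))   # perturbation by factor k
    r = float(np.clip(r, 0.1, 0.9))                  # keep both roles present
    pNum = max(1, int(round(r * pop)))               # discoverers
    sNum = pop - pNum                                # joiners
    return pNum, sNum
```

With pop = 50 and T_max = 100, the split moves from roughly 35/15 at the start toward 5/45 at the end, matching the intended shift from global to local search.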

2.3 Dynamic inertia weight

From the SSA mechanism, the discoverer's movement toward the optimal solution in each iteration tends to proceed in large "jumping" steps. This is certainly beneficial to convergence speed, but gathering a large part of the population in a short time reduces the diversity of the search to a certain extent, and the algorithm may fall into local extrema because of search blind spots and insufficient search range. At the same time, how the discoverer uses its own position remains unchanged throughout the update process. Inspired by the particle swarm optimization algorithm and literature [4], this article introduces an inertia-weight perturbation strategy into the sparrow search algorithm to update the discoverer's position, improving the discoverer's global search ability and perturbing individuals around their original positions to enhance information exchange within the sparrow population. The improved discoverer position update formula is as follows:

[Formula images: dynamic inertia weight w(t) and the improved discoverer position update]

Here wmax and wmin are the maximum and minimum values of the weight, Tmax is the maximum number of iterations, and t is the current iteration number. In the early iterations, a larger inertia weight benefits the discoverer's global exploration; in the later iterations, a smaller inertia weight benefits the discoverer's local search, improving the convergence speed and accuracy of the algorithm.

[Figure: inertia weight curve over the iterations]
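A minimal Python sketch of this mechanism; the linearly decreasing schedule for w and the producer rule below (the standard SSA update with w applied to the current positions) are stand-ins for the article's formulas, which appear only as images:

```python
import numpy as np

def inertia_weight(t, T_max, w_max=0.9, w_min=0.4):
    # Larger weight early (global exploration), smaller weight late
    # (local refinement); a linear decay is assumed here.
    return w_max - (w_max - w_min) * t / T_max

def discoverer_update(X, t, T_max, ST=0.8, rng=None):
    # Producer (discoverer) update mirroring the standard SSA rule,
    # with the inertia weight w applied to the current positions.
    rng = rng or np.random.default_rng()
    w = inertia_weight(t, T_max)
    n, dim = X.shape
    R2 = rng.random()                          # alarm value
    if R2 < ST:                                # no predator: contract smoothly
        alpha = rng.uniform(1e-6, 1.0, (n, 1))
        ranks = np.arange(1, n + 1).reshape(-1, 1)
        return w * X * np.exp(-ranks / (alpha * T_max))
    # predator detected: jump with a Gaussian step
    return w * X + rng.normal(size=(n, dim))
```

At t = 0 the weight equals wmax = 0.9, and at t = Tmax it has decayed to wmin = 0.4, so the discoverer's steps shrink as the search narrows.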

2.4 Hierarchy

In SSA, the scout position is updated by the following formula:
[Formula image: original scout position update]

When encountering danger, an individual sparrow's escape behavior is monotonous and narrow: the update considers only the current optimal solution and ignores other sub-optimal solutions, which prematurely drives all individuals to converge on the current best individual. If the current best individual is not the global extreme point, the result easily falls into a local solution. Therefore, the hierarchy strategy of the Gray Wolf Optimization Algorithm (GWO) is introduced, selecting the first three historical optimal positions {Xα, Xβ, Xγ} to construct a potential optimal solution. This strategy searches nearby reliable solutions more flexibly and reduces the probability of SSA falling into local optima. The improved scout position update formula is:
[Formula image: hierarchy-based scout position update]

Among them, the weight θi corresponding to each wolf is calculated as follows:
[Formula image: calculation of the weights θi]

Introducing the hierarchy expands the escape range of SSA. However, as an individual approaches the optimal solution, the updated position given by the equation is a weighted sum of the current optimal, sub-optimal and third-best solutions and carries a certain degree of randomness, so the updated position is not necessarily better; the algorithm may fail to converge or even diverge. To ensure convergence to the optimal solution, the "greedy" strategy from differential evolution is adopted: the updated position is accepted only when its cost value is better than the current one; otherwise, the original position is kept. The position update function is:

[Formula image: greedy selection of the updated scout position]
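The hierarchy-plus-greedy update can be sketched in Python as follows; the fitness-proportional form of the weights θi and the Gaussian step toward the leaders are assumptions (the article's formulas appear only as images), but the greedy acceptance matches the differential-evolution rule described above:

```python
import numpy as np

def hierarchy_scout_update(X_i, leaders, leader_costs, objective, rng=None):
    # Scout update guided by the three best historical positions
    # {X_alpha, X_beta, X_gamma} (GWO hierarchy), then greedy acceptance.
    rng = rng or np.random.default_rng()
    X_a, X_b, X_g = leaders
    f = np.asarray(leader_costs, dtype=float)
    # Cost-proportional weights theta_i (smaller cost -> larger weight);
    # this particular form is an assumption.
    inv = 1.0 / (f - f.min() + 1.0)
    theta = inv / inv.sum()
    target = theta[0] * X_a + theta[1] * X_b + theta[2] * X_g
    # Random step toward the weighted leader position
    candidate = X_i + rng.normal(size=X_i.shape) * (target - X_i)
    # Greedy selection from differential evolution: accept only improvements
    return candidate if objective(candidate) < objective(X_i) else X_i
```

Because of the greedy acceptance, the returned position is never worse than the original one, which is exactly what prevents the weighted random step from making the algorithm diverge.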

2.5 Algorithm flow
The flow chart of the adaptive sparrow search algorithm with hybrid gray wolf hierarchy is as follows:

[Figure: flow chart of GWHASSA]

It can be seen that, overall, these improvements do not add significant computational burden to the algorithm.

03 Code directory

[Image: code directory]

Among them, Main_GWHASSA.m is the main program, with detailed code comments; a single run of Main_GWHASSA produces all the results.

The running results include: a chaos sequence comparison chart, an adaptive parameter change chart, an inertia weight chart, and iteration curves of the algorithm on each test function. Finally, an Excel table is generated, containing the algorithm's average value over n runs on each function, the running time, and the optimal solution. The garbled-encoding files have also been fixed, and a txt file of the main code is provided.

Part of the source code:
[Image: part of the source code]

04 Algorithm performance

To evaluate the effectiveness of the algorithm, performance tests were conducted on multiple test functions and compared against several other algorithms; the results show that the improvements are effective.

[Figures: comparison results of the algorithms on the test functions]

05 Source code acquisition

Public account (KAU’s cloud experimental platform) backend reply: GWHASSA

In addition to these improvement measures, you can also apply a random walk or other strategies to the optimal individual at the end of the algorithm to avoid falling into local optima and further improve performance.

Kaka will continue to introduce other intelligent optimization algorithms in subsequent articles and provide source code annotated by Kaka for free. As these improved methods show, the update mechanisms of various algorithms are an important source of innovation; studying them may inspire you to create high-performance algorithms, so understanding more algorithms is helpful when designing improvements. Kaka will also try to write as concisely and logically as possible to help you learn about other algorithms. If you have read this far, feel free to give it a like.

references

[1] XUE J K, SHEN B. A novel swarm intelligence optimization approach: sparrow search algorithm [J]. Systems Science & Control Engineering, 2020, 8(1): 22-34.

[2] Jiang Shanhe, Wang Qishen, Wang Julang. A new chaotic hybrid optimization algorithm based on Skew Tent mapping [J]. Control Theory & Applications, 2007, 24(2): 269-273.

[3] Zhang Na, Zhao Zedan, Bao Xiaoan, et al. Improved Tent chaotic gravity search algorithm [J]. Control and Decision, 2020, 35(4): 893-900.

[4] Zhang Dingxue, Guan Zhihong, Liu Xinzhi. An adaptive particle swarm algorithm that dynamically changes inertia weight [J]. Control and Decision, 2008, 23(11): 1253-1257.

Another note: If anyone has optimization problems to be solved (in any field), you can send them to me, and I will selectively update articles that use optimization algorithms to solve these problems.

If this article is helpful or inspiring to you, you can click Like/Reading (ง •̀_•́)ง in the lower right corner (you don’t have to click)

Origin blog.csdn.net/sfejojno/article/details/133936907