Significantly improved! | A chaotic whale optimization algorithm (WOA) integrating simulated annealing and adaptive mutation, applied to function optimization


The whale optimization algorithm (WOA) is a swarm intelligence optimization method proposed by Mirjalili and Lewis [1] in 2016, derived from simulating the hunting behavior of humpback whale pods in nature. Compared with other swarm intelligence optimization algorithms, WOA has a novel structure and fewer control parameters. It shows good optimization performance on many numerical optimization and engineering problems, outperforming intelligent optimization algorithms such as the ant colony algorithm and particle swarm optimization.

However, WOA also has shortcomings when facing complex multi-variable problems, such as low search efficiency, poor convergence, and a tendency to fall into local optima. To improve WOA's optimization performance, this article therefore proposes a WOA variant: a chaotic whale optimization algorithm that integrates simulated annealing and adaptive mutation.

00 Article Directory

1 Principle of whale optimization algorithm

2 Improved whale optimization algorithm

3 Code directory

4 Algorithm performance

5 Source code acquisition

6 Summary

01 Principle of whale optimization algorithm

The principle of the whale optimization algorithm, along with its MATLAB code, can be found in the author's previous articles and will not be repeated here.

02 Improved whale optimization algorithm

2.1 Population initialization via chaotic opposition-based learning

The initialization of a population-based algorithm affects its search performance. Since no prior information is available, the WOA population is usually generated by random initialization. This strategy works to an extent, but sometimes the whales are distributed unevenly over the search domain, which may push them away from the global optimum, lengthen the time needed to converge, and thus lower the convergence speed. Chaotic maps, owing to their randomness and sensitivity, can supply the diversity the search requires.

Chaotic maps commonly used in the literature include the Logistic map and the Tent map. As shown in [3], the Logistic map's distribution is fairly uniform over middle values but heavily concentrated at both ends, which is detrimental to finding the optimum when the global optimum does not lie at the ends of the design-variable space. The Tent chaotic map has a simple structure and offers better ergodic uniformity and faster search than the Logistic map. Meanwhile, [2] proved theoretically that population initialization based on opposition-based (reverse) learning yields better initial solutions and thus accelerates convergence. This article therefore combines the advantages of both ideas and generates the initial population through chaotic mapping plus opposition-based learning.

Since the Tent map is prone to short cycles and fixed points, a random term is added to the expression of the standard Tent map to prevent the chaotic sequence from falling into small-period or unstable periodic points during iteration:

z(k+1) = 2·z(k) + rand(0,1)/N,        0 ≤ z(k) ≤ 1/2
z(k+1) = 2·(1 − z(k)) + rand(0,1)/N,  1/2 < z(k) ≤ 1

where N is the number of points in the sequence. Introducing the random term rand(0,1)/N not only preserves the randomness, ergodicity, and regularity of the Tent chaotic map, but also effectively prevents the iteration from falling into small-period and unstable periodic points.

Next, the chaotic sequence z(k,j) is used to generate the corresponding initial population x(i,j):

x(i,j) = lb(j) + z(i,j)·(ub(j) − lb(j))

where lb(j) and ub(j) are the lower and upper bounds of the j-th dimension.

Then, the opposite (reverse) population x*(i,j) is generated:

x*(i,j) = lb(j) + ub(j) − x(i,j)

Finally, the initial population and the opposite population are compared, and the N individuals with the best fitness are selected to form the initial population. This initialization strategy yields high-quality solutions from a relatively evenly distributed population, thereby speeding up convergence.
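As a sketch of the whole initialization strategy above, here is an illustrative Python version (the article's actual implementation is in MATLAB; the function names `tent_sequence` and `chaotic_opposition_init` are invented for this example):

```python
import numpy as np

def tent_sequence(n, dim, rng):
    """Tent map with the rand(0,1)/N perturbation described above,
    which keeps the sequence out of fixed points and short cycles."""
    z = np.empty((n, dim))
    z[0] = rng.random(dim)
    for k in range(1, n):
        zk = z[k - 1]
        z[k] = np.where(zk < 0.5, 2.0 * zk, 2.0 * (1.0 - zk)) + rng.random(dim) / n
        z[k] %= 1.0  # fold back into [0, 1]
    return z

def chaotic_opposition_init(fitness, n_pop, dim, lb, ub, seed=0):
    """Chaotic opposition-based initialization: map a Tent sequence into
    [lb, ub], build the opposite population x* = lb + ub - x, then keep
    the n_pop fittest individuals from the union (minimization)."""
    rng = np.random.default_rng(seed)
    z = tent_sequence(n_pop, dim, rng)
    x = lb + z * (ub - lb)           # chaotic candidates
    x_opp = lb + ub - x              # opposite candidates
    union = np.vstack([x, x_opp])
    f = np.apply_along_axis(fitness, 1, union)
    return union[np.argsort(f)[:n_pop]]
```

For example, `chaotic_opposition_init(lambda v: np.sum(v**2), 30, 5, -10.0, 10.0)` selects 30 individuals in [-10, 10]^5 out of the 60 chaotic and opposite candidates.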

2.2 Nonlinear convergence factor

There are two main coefficient parameters, A and C, in the WOA algorithm:

A = 2·a·r − a
C = 2·r

where r denotes a random number in [0, 1] (drawn independently for A and C) and a is the convergence factor.

Here A mainly depends on a, and C mainly depends on r. The global exploration and local exploitation of the whale algorithm are governed mainly by A; in other words, the control parameter a plays a crucial role in the algorithm's convergence speed and search accuracy. When a is large, the global search ability is strong and the algorithm escapes local optima easily, but its local exploitation ability is weak, slowing convergence. Conversely, when a is small, local exploitation is strong and convergence accelerates, but the algorithm falls into local optima more easily.

In the traditional WOA algorithm, a decreases linearly from 2 to 0. Complex optimization problems often contain multiple local optima, and the linear decrease weakens the algorithm's ability to escape them, so this article proposes a nonlinear convergence factor:

[Formula image: nonlinear convergence factor a(t)]

The curve of the nonlinear convergence factor a is shown in the figure.

[Figure: nonlinear convergence factor a versus iteration number]

As the figure shows, in the early iterations a is large and decays slowly, so global exploration is strong, which helps keep the population from falling into local optima. In the later iterations, a quickly decays to a small value, local exploitation is strong, and population convergence accelerates. This nonlinear update strategy for the convergence factor is therefore better suited to solving nonlinear, complex optimization problems.
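The article's exact expression for a only appears as a formula image, so the sketch below uses a quadratic decay as one plausible schedule matching the described behavior (large and slowly decaying early, quickly decaying late); `a_nonlinear` and `coeff_A` are illustrative names, not the article's:

```python
import numpy as np

def a_linear(t, t_max):
    """Standard WOA: a decreases linearly from 2 to 0."""
    return 2.0 * (1.0 - t / t_max)

def a_nonlinear(t, t_max):
    """Illustrative nonlinear factor: decays slowly in early iterations
    (strong exploration) and quickly in late iterations (exploitation)."""
    return 2.0 * (1.0 - (t / t_max) ** 2)

def coeff_A(a, rng):
    """The coefficient A = 2*a*r - a then inherits whichever schedule a uses."""
    return 2.0 * a * rng.random() - a
```

At every iteration the nonlinear schedule keeps a at or above the linear one, delaying the switch from exploration to exploitation.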

2.3 Dynamic inertia weight

Since the prey exerts a varying influence on the whales' position updates during the spiral update phase of hunting, and inspired by [3], this article proposes a dynamic inertia weight strategy:

[Formula image: dynamic inertia weight w(t)]

where wmax and wmin are the maximum and minimum weight values, Tmax is the maximum number of iterations, and t is the current iteration.

The spiral position update formula therefore becomes:

[Formula image: inertia-weighted spiral position update]

The weight parameter ω takes a large value early in the iteration, giving the whale algorithm strong global search ability and preventing it from falling into local extrema; it takes a smaller value later, when the algorithm's strong local search ability accelerates convergence to the optimal solution.
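As an illustration, the update can be sketched in Python; the linear w schedule below is an assumption standing in for the article's formula image, and the spiral form X(t+1) = |X* − X|·e^(b·l)·cos(2πl) + w·X* follows the common inertia-weighted WOA variant:

```python
import numpy as np

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Illustrative schedule: w decreases from w_max to w_min over the run
    (the article's exact formula appears only as an image)."""
    return w_max - (w_max - w_min) * t / t_max

def spiral_update(x, x_best, w, b=1.0, rng=None):
    """Spiral position update with the inertia weight applied to the leader:
    X(t+1) = |X* - X| * exp(b*l) * cos(2*pi*l) + w * X*."""
    rng = rng or np.random.default_rng()
    l = rng.uniform(-1.0, 1.0)  # random number in [-1, 1], as in standard WOA
    return np.abs(x_best - x) * np.exp(b * l) * np.cos(2.0 * np.pi * l) + w * x_best
```

Early in the run w is near w_max, so the leader term dominates less aggressively than a fixed weight of 1 would late in the run, matching the exploration-then-exploitation behavior described above.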

2.4 Simulated annealing operation and adaptive mutation perturbation

The simulated annealing (SA) algorithm builds on the acceptance criterion proposed by Metropolis in 1953 [4]. Its characteristic is that it retains inferior solutions with a certain probability, which increases population diversity and improves the ability to jump out of local optima. This article integrates the simulated annealing idea into the WOA algorithm.

Meanwhile, in the later iterations the search strategy drives all whale individuals toward the best individual, reducing population diversity; if the best individual is a local optimum at that point, the algorithm converges prematurely. To prevent this, this article proposes an adaptive mutation perturbation:

[Formula image: adaptive mutation perturbation]

Here gaussian denotes Gaussian mutation and cauchy denotes Cauchy mutation. When the algorithm starts, t is small and the weight of the Cauchy mutation is large; the larger step sizes obtained through Cauchy mutation keep the algorithm from falling into local optima. As the run proceeds, t grows and the weight of the Gaussian mutation becomes larger; Gaussian mutation's excellent local search ability allows candidate solutions to be searched precisely within a local range, improving the algorithm's optimization accuracy.

New solutions are generated through the adaptive mutation perturbation, and the simulated annealing step then accepts worse solutions with a certain probability, allowing the search to jump out of local optima and making up for WOA's weaknesses.
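A minimal Python sketch of these two operations, assuming a (1 − t/Tmax) Cauchy weight and a t/Tmax Gaussian weight (the article's exact mutation formula only appears as an image):

```python
import math
import numpy as np

def adaptive_mutation(x, t, t_max, rng):
    """Cauchy-dominated perturbation early in the run, Gaussian-dominated
    late; the weighting here is an assumption matching the text."""
    ratio = t / t_max
    step = (1.0 - ratio) * rng.standard_cauchy(x.shape) + ratio * rng.standard_normal(x.shape)
    return x * (1.0 + step)

def sa_accept(f_old, f_new, temperature, rng):
    """Metropolis criterion: always accept an improvement, and accept a
    worse solution with probability exp(-(f_new - f_old) / temperature)."""
    if f_new <= f_old:
        return True
    return rng.random() < math.exp(-(f_new - f_old) / temperature)
```

At high temperature nearly any candidate is kept, preserving diversity; as the temperature cools, the acceptance of worse solutions becomes rare and the search settles.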

2.5 Algorithm process

The flow of the algorithm in this article is as follows:

[Figure: flowchart of the proposed algorithm]
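For reference, the whole flow can be condensed into a runnable Python sketch (the article's implementation is in MATLAB; every formula that appears only as an image here, namely the nonlinear a, the inertia weight, the mutation weights, and the cooling schedule, is replaced by a plausible stand-in):

```python
import numpy as np

def sphere(v):
    """F1-style test function: sum of squares, minimum 0 at the origin."""
    return float(np.sum(v * v))

def aamcwoa(fitness, dim=5, n_pop=20, t_max=200, lb=-10.0, ub=10.0, seed=1):
    rng = np.random.default_rng(seed)
    # 1) chaotic (Tent) + opposition-based initialization
    z = rng.random((n_pop, dim))
    for _ in range(10):  # iterate the perturbed Tent map a few times
        z = (np.where(z < 0.5, 2 * z, 2 * (1 - z)) + rng.random((n_pop, dim)) / n_pop) % 1.0
    x = lb + z * (ub - lb)
    union = np.vstack([x, lb + ub - x])          # opposite population
    f_union = np.apply_along_axis(fitness, 1, union)
    order = np.argsort(f_union)[:n_pop]          # keep the n_pop fittest
    x, f = union[order], f_union[order]
    best, f_best = x[0].copy(), f[0]
    temp = 100.0                                 # SA initial temperature
    for t in range(t_max):
        a = 2.0 * (1.0 - (t / t_max) ** 2)       # nonlinear convergence factor
        w = 0.9 - 0.5 * t / t_max                # dynamic inertia weight
        for i in range(n_pop):
            A = 2.0 * a * rng.random() - a
            C = 2.0 * rng.random()
            if rng.random() < 0.5:               # encircling / random search
                leader = best if abs(A) < 1 else x[rng.integers(n_pop)]
                x[i] = leader - A * np.abs(C * leader - x[i])
            else:                                # inertia-weighted spiral
                l = rng.uniform(-1.0, 1.0)
                x[i] = np.abs(best - x[i]) * np.exp(l) * np.cos(2 * np.pi * l) + w * best
            x[i] = np.clip(x[i], lb, ub)
            f[i] = fitness(x[i])
            if f[i] < f_best:
                best, f_best = x[i].copy(), f[i]
        # 2) adaptive mutation of the best, with SA acceptance
        ratio = t / t_max
        step = (1 - ratio) * rng.standard_cauchy(dim) + ratio * rng.standard_normal(dim)
        cand = np.clip(best * (1.0 + step), lb, ub)
        f_cand = fitness(cand)
        if f_cand < f_best:
            best, f_best = cand, f_cand
        elif rng.random() < np.exp(-(f_cand - f_best) / temp):
            x[rng.integers(n_pop)] = cand        # keep a worse solution for diversity
        temp *= 0.95                             # geometric cooling
    return best, f_best
```

Calling `aamcwoa(sphere)` returns the best solution found and its fitness; the sketch is meant to show how the pieces fit together, not to reproduce the article's exact results.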

03 Code directory

[Figure: code directory]

Among them, Main_AAMCWOA.m is the main program, with detailed code comments. Running Main_AAMCWOA once produces all of the results, including chaotic-sequence comparison plots, control-parameter comparison plots, and iteration curves on each test function. Finally, an Excel sheet is generated containing the mean value, running time, and optimal solution of the algorithm over n runs on each function. TestFuc.m can quickly generate plots of the test functions.

At the same time, the garbled-file problem has been fixed, and a txt file of the main code is provided.

Part of the code is shown below, with dedicated comments on the improved parts.

[Figure: code excerpt]

The generated Excel table is as follows:

[Figure: Excel results table]

In the figure, 1-5 correspond to AAMCWOA, GWO, WOA, PSO, and GA algorithms respectively.

04 Algorithm performance

The classic 2005 benchmark function set is used to test optimization performance. This is one of the most widely used and classic test sets and contains 23 benchmark functions, of which F1-F5 are unimodal functions, F6-F12 are basic multimodal functions, F13-F14 are extended multimodal functions, and F15-F23 are multimodal composite functions. Details of the functions are given below:

[Table image: benchmark function details]

The running results are as follows:

[Figures: convergence curves of the algorithms on the benchmark functions]

The results show that the improved whale optimization algorithm achieves better convergence speed and accuracy on almost all functions, confirming that the improvements are effective.

05 Source code acquisition

Available on the author’s WeChat public account: KAU’s cloud experimental platform

Note: every figure in the article can be reproduced directly by running the program.

06 Summary

The improved whale optimization algorithm proposed in this article performs well, and the improvements are effective. Moreover, the improvement strategies can be generalized: for example, the simulated annealing and adaptive mutation applied at the end of each iteration can help many algorithms jump out of local optima. There is also room for further improvement; for instance, the position update strategy of the whale optimization algorithm could be enhanced with Levy flight, differential evolution, and so on.

references

[1] MIRJALILI S, LEWIS A. The whale optimization algorithm[J]. Advances in Engineering Software, 2016, 95: 51-67.

[2] Zhang Qiang, Li Panchi. Adaptive grouped chaotic cloud model frog leaping algorithm to solve continuous space optimization problems [J]. Control and Decision, 2015, 30(5): 923-928

[3] SHI Y, EBERHART R. A modified particle swarm optimizer[C]// Proceedings of the IEEE International Conference on Evolutionary Computation, 1999.

[4] DUPANLOUP I, SCHNEIDER S, EXCOFFIER L. A simulated annealing approach to define the genetic structure of populations[J]. Molecular Ecology, 2002, 11(12): 2571-2581.

Another note: if anyone has an optimization problem to be solved (in any field), you can send it to me, and I will selectively write articles that apply optimization algorithms to these problems.

If this article is helpful or inspiring to you, you can click Like/Reading (ง•̀_•́)ง in the lower right corner (you don’t have to click)


Origin blog.csdn.net/sfejojno/article/details/134322851