Aquila Optimizer and Harris Hawks Hybrid Optimization Algorithm (DAHHO) Integrating Dynamic Opposition-Based Learning, with Code



Abstract: The Aquila optimizer (AO) and Harris hawks optimization (HHO) are metaheuristic optimization algorithms proposed in recent years. AO has strong global exploration ability, but its convergence accuracy is low and it easily falls into local optima; HHO has strong local exploitation ability, but suffers from weak global exploration and slow convergence. To address the limitations of the original algorithms, this paper hybridizes the two and introduces a dynamic opposition-based learning strategy, yielding a hybrid Aquila optimizer and Harris hawks optimization algorithm (DAHHO). First, dynamic opposition-based learning is applied in the initialization phase to improve the quality of the initial population and the convergence speed of the hybrid algorithm. In addition, the hybrid algorithm retains the exploration mechanism of AO and the exploitation mechanism of HHO to improve its overall optimization ability.

1. Harris hawks optimization algorithm

For the principles of the basic Harris hawks optimization algorithm, see my blog: https://blog.csdn.net/u011835903/article/details/108528147

2. Improved Harris hawks optimization algorithm

2.1 Dynamic opposition-based learning strategy

This paper introduces a dynamic opposition-based learning strategy to improve the quality of the initial solutions. It is computed as follows:
$$\boldsymbol{X}_{\mathrm{DOBL}} = \boldsymbol{X}_{\mathrm{init}} + r_{18} \times \left( r_{19} \times \left( \mathrm{LB} + \mathrm{UB} - \boldsymbol{X}_{\mathrm{init}} \right) - \boldsymbol{X}_{\mathrm{init}} \right)$$
where $\boldsymbol{X}_{\mathrm{init}}$ is the randomly generated initial population, and $r_{18}$ and $r_{19}$ are random numbers uniformly distributed in $[0, 1]$. The algorithm first generates the original initial population $\boldsymbol{X}_{\mathrm{init}}$ and the opposition-based population $\boldsymbol{X}_{\mathrm{DOBL}}$, then merges the two into a new population $\boldsymbol{X}_{\mathrm{new}} = \{\boldsymbol{X}_{\mathrm{DOBL}} \cup \boldsymbol{X}_{\mathrm{init}}\}$. The fitness of the new population is calculated, a greedy strategy makes the individuals compete fully within the population, and the best $N$ individuals are selected as the initial population. This moves the population closer to the optimal solution from the start, thereby improving the convergence speed of the algorithm.
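The initialization strategy above can be sketched in a few lines of NumPy. This is a minimal illustration of the DOBL equation and the greedy merge, not the authors' code; the function name `dobl_init` and the bound-clipping step are assumptions.

```python
import numpy as np

def dobl_init(fitness, lb, ub, n, dim, seed=None):
    """Dynamic opposition-based learning initialization (sketch).

    fitness : callable mapping a (dim,) vector to a scalar (minimization)
    lb, ub  : scalar bounds of the search space
    n       : population size N to keep after the greedy selection
    """
    rng = np.random.default_rng(seed)
    x_init = rng.uniform(lb, ub, size=(n, dim))      # random initial population
    r18 = rng.random((n, 1))                         # r18 ~ U(0, 1), one per individual
    r19 = rng.random((n, 1))                         # r19 ~ U(0, 1)
    # X_DOBL = X_init + r18 * (r19 * (LB + UB - X_init) - X_init)
    x_dobl = x_init + r18 * (r19 * (lb + ub - x_init) - x_init)
    x_dobl = np.clip(x_dobl, lb, ub)                 # keep candidates feasible (assumed)
    merged = np.vstack([x_init, x_dobl])             # X_new = X_init ∪ X_DOBL (2N rows)
    fit = np.apply_along_axis(fitness, 1, merged)
    keep = np.argsort(fit)[:n]                       # greedy: keep the best N individuals
    return merged[keep], fit[keep]
```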

2.2 Theoretical analysis of the improved hybrid algorithm

The Aquila adopts four different predation behaviors depending on the prey. In the early iterations, a random number $r_1$ selects either the high-soar search or the contour flight around the prey; these two exploration methods mainly target fast-moving prey. The position update therefore considers the best position, the mean position, and random positions of the Aquila population: formula (1) uses the best and mean positions of the population to realize a wide-range search of the whole search space, while formula (3) combines Lévy flight with the best position to apply large random perturbations across the search space, reflecting the algorithm's strong global exploration ability. When the iteration count satisfies $t > \frac{2}{3} T$, a random number $r_4$ selects either a low-altitude descending attack or a short-range ground attack; these two attack modes mainly target slow-moving prey and reflect the algorithm's local search ability. However, biologically the Aquila tends to hunt alone and has weak ground mobility, so it cannot attack prey on the ground effectively. From the mathematical description, the local exploitation process does not fully search the selected region, the effect of Lévy flight is weak, and updating positions according to fitness aggravates stagnation in local optima. In addition, the AO algorithm divides its stages only by iteration count, which cannot effectively balance its global and local phases.
Therefore, the global exploration stage of AO is highly random: the population covers a wide range of the search space and is unlikely to miss key search information, while the local exploitation stage easily falls into local optima. HHO realizes the transition from global to local search through the energy decay of the prey. Its exploration process mainly relies on the information of the best individual without communicating with other individuals, which reduces population diversity and slows convergence. As iterations increase, the prey's energy decreases and the algorithm enters the local exploitation stage, where four different predation strategies are adopted according to the prey's energy and escape probability. When the escape probability satisfies $r_{16} \geqslant 0.5$, soft or hard besiege is selected according to the prey's energy, and positions are updated from the distance to the prey and the prey's position, respectively. Otherwise, soft or hard besiege with progressive rapid dives is selected according to the prey's energy; both of these methods add a Lévy flight term and choose between the Lévy walk and the rapid dive attack according to fitness, so the algorithm can effectively jump out of local optima. In general, the global search of AO and the local search of HHO are the core features of the two algorithms. This paper combines the global exploration phase of AO with the local exploitation phase of HHO to give full play to the advantages of both, retaining strong global exploration, faster convergence, and the ability to escape local optima.
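The energy-decay switch that drives the transition from exploration to exploitation can be written in a few lines. This assumes the standard HHO escape-energy formula $E = 2 E_0 (1 - t/T)$ with $E_0 \sim U(-1, 1)$, used here as the hybrid's phase switch:

```python
import numpy as np

def escape_energy(t, T, rng):
    """Prey escape energy: E = 2 * E0 * (1 - t / T), E0 ~ U(-1, 1).

    |E| >= 1 selects the AO exploration phase; |E| < 1 selects the
    HHO exploitation phase (strategies 1-4).
    """
    e0 = 2 * rng.random() - 1      # initial energy, redrawn every call
    return 2 * e0 * (1 - t / T)    # decays linearly to 0 at t = T
```

Because the envelope $2(1 - t/T)$ shrinks with $t$, exploration dominates early iterations and exploitation dominates late ones, while the random $E_0$ still allows occasional phase mixing.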
In addition, the dynamic opposition-based learning strategy introduced in the initialization stage further improves the quality of the initial population, raises the convergence speed and accuracy of the hybrid algorithm, and enhances its overall optimization performance.

The steps of the DAHHO algorithm are as follows:

  1. Initialize the population, compute the dynamic opposition-based learning population, and keep the better individuals according to the greedy strategy as the population entering the main iteration loop.
  2. Calculate the fitness values of the population and record the best individual.
  3. If $|E| \geqslant 1$, each individual randomly chooses formula (1) or formula (3) to perform exploration.
  4. If $|E| < 1$, the population chooses an exploitation strategy according to the escape energy of the prey and the fitness value:
     - Strategy 1, soft besiege: when $r_{16} \geqslant 0.5$ and $|E| \geqslant 0.5$, individuals update their positions with formula (11).
     - Strategy 2, hard besiege: when $r_{16} \geqslant 0.5$ and $|E| < 0.5$, individuals update their positions with formula (4).
     - Strategy 3, soft besiege with progressive rapid dives: when $r_{16} < 0.5$ and $|E| \geqslant 0.5$, individuals update their positions with formulas (5), (6), and (7).
     - Strategy 4, hard besiege with progressive rapid dives: when $r_{16} < 0.5$ and $|E| < 0.5$, individuals update their positions with formulas (6), (7), and (8).
  5. Check whether the termination condition is met; if so, exit the loop, otherwise return to step 2.
  6. Output the best position and its fitness value.
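The steps above can be sketched as a simplified Python skeleton. This is not the authors' implementation: the AO exploration moves (formulas (1)/(3)) and the dive terms of strategies 3 and 4 are replaced by representative stand-in updates from standard AO/HHO, and the function name `dahho_sketch` is illustrative.

```python
import numpy as np

def dahho_sketch(fitness, lb, ub, n=30, dim=10, T=200, seed=0):
    """Simplified DAHHO main loop: AO exploration when |E| >= 1,
    HHO exploitation strategies 1-4 when |E| < 1 (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n, dim))
    fit = np.apply_along_axis(fitness, 1, x)
    i_best = int(np.argmin(fit))
    best, best_f = x[i_best].copy(), float(fit[i_best])
    for t in range(T):
        mean_pos = x.mean(axis=0)                      # population mean position
        for i in range(n):
            e0 = 2 * rng.random() - 1
            E = 2 * e0 * (1 - t / T)                   # prey escape energy
            if abs(E) >= 1:
                # AO exploration: stand-in for formulas (1)/(3),
                # searching around the best and mean positions
                x_new = best * (1 - t / T) + (mean_pos - best) * rng.random()
            else:
                r16 = rng.random()                     # escape probability
                J = 2 * (1 - rng.random())             # random jump strength
                if r16 >= 0.5 and abs(E) >= 0.5:       # strategy 1: soft besiege
                    x_new = (best - x[i]) - E * np.abs(J * best - x[i])
                elif r16 >= 0.5:                       # strategy 2: hard besiege
                    x_new = best - E * np.abs(best - x[i])
                elif abs(E) >= 0.5:                    # strategy 3 (dive terms omitted)
                    x_new = best - E * np.abs(J * best - x[i])
                else:                                  # strategy 4 (dive terms omitted)
                    x_new = best - E * np.abs(J * best - mean_pos)
            x_new = np.clip(x_new, lb, ub)
            f_new = float(fitness(x_new))
            if f_new < fit[i]:                         # greedy replacement
                x[i], fit[i] = x_new, f_new
                if f_new < best_f:
                    best, best_f = x_new.copy(), f_new
    return best, best_f
```

Because every replacement is greedy, the best fitness is monotonically non-increasing over iterations, mirroring step 5's loop back to the fitness evaluation in step 2.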

3. Experimental results


4. References

[1] Jia Heming, Liu Qingxin, Liu Yuxiang, et al. Hybrid optimization algorithm of Aquila optimizer and Harris hawks optimization combining dynamic opposition-based learning [J]. CAAI Transactions on Intelligent Systems, 2023, 18(1): 104-116.

5. Matlab code

6. Python code


Origin blog.csdn.net/u011835903/article/details/131153140