Combined Multi-Strategy Improved Adaptive Harris Hawk Optimization Algorithm (LTWHHO) - with Code



Abstract: Aiming at the shortcomings of the standard Harris Hawk optimization algorithm (HHO), namely that it easily falls into local optima, has low optimization accuracy, and converges unsatisfactorily slowly, an improved adaptive Harris Hawk optimization algorithm combining multiple strategies (LTWHHO) is proposed. To improve population diversity, an improved Logistic map is used for initialization, and a nonlinear function is proposed to replace the linearly decreasing parameter of the escape energy operator in order to balance global exploration and local development. In the global exploration stage, a T-distribution strategy is introduced to improve convergence speed and accuracy. In the local development stage, an adaptive dynamic disturbance mechanism is introduced to enhance the ability to jump out of local optima.

1. Harris Hawk optimization algorithm

For the principles of the basic Harris Hawk optimization algorithm, see my blog: https://blog.csdn.net/u011835903/article/details/108528147

2. Improved Harris Hawk optimization algorithm

2.1 Logistic initialization strategy

The Logistic chaotic map is often used to initialize populations; it has wide distribution, strong randomness, and autocorrelation [9], which makes the Harris hawk population more diverse and more evenly distributed. Inspired by literature [10], the original Logistic chaotic mapping function is improved. The objective function is set as shown in formula (3), and formula (4) generates the cascaded chaotic sequence $[y_n]$:
$$\min f(x_1, x_2, \cdots, x_n), \quad lb_i < x_i < ub_i. \tag{3}$$

$$\begin{cases} x_{n+1}' = 4 x_n' \left(1 - x_n'\right), \\ y_n' = \dfrac{1}{\pi} \arcsin\left(2 x_{n+1}' - 1\right) - \dfrac{1}{2}, \\ x_{n+1} = 4 y_n' \left(1 - y_n'\right), \\ y_n = \dfrac{1}{\pi} \arcsin\left(2 x_{n+1} - 1\right) - \dfrac{1}{2}. \end{cases} \tag{4}$$
where $n$ is the population size, and $ub_i$ and $lb_i$ are the upper and lower bounds of the solution space. The sequence $[y_n]$ is then linearly transformed, as shown in formula (5), to obtain the initial positions of the Harris hawks:
$$P_i = lb_i + \left(ub_i - lb_i\right) y_n. \tag{5}$$
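The initialization above can be sketched in Python as follows. Two details are assumptions: the chaos seed `x0 = 0.7` is an arbitrary starting value (the paper does not fix one here), and the arcsine step uses $+\tfrac{1}{2}$ so that the sequence stays in $[0, 1]$, since with $-\tfrac{1}{2}$ the logistic iteration would leave its domain.

```python
import numpy as np

def logistic_init(pop_size, dim, lb, ub, x0=0.7):
    """Cascaded Logistic-map initialization, a sketch of formulas (4)-(5).

    Assumption: +1/2 in the arcsine transform keeps values in [0, 1];
    x0 is an arbitrary chaotic seed in (0, 1), avoiding fixed points.
    """
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    x = x0
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            xp = 4.0 * x * (1.0 - x)                      # first Logistic map
            yp = np.arcsin(2.0 * xp - 1.0) / np.pi + 0.5  # arcsine transform
            x = 4.0 * yp * (1.0 - yp)                     # second Logistic map
            y = np.arcsin(2.0 * x - 1.0) / np.pi + 0.5
            pop[i, j] = lb[j] + (ub[j] - lb[j]) * y       # formula (5)
    return pop
```

Each dimension of each hawk receives the next value of a single chaotic stream, which spreads the population across the search box without calling a random number generator.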

2.2 Escape energy operator decrement improvement strategy

The decreasing control parameter $E_1$ in the original algorithm is a linear decreasing function. During the actual hunting process of the hawk flock, the prey alternates between fast escapes and short rests, so its energy decline is not a linear decreasing process. In this paper, the linearly decreasing control parameter $E_1$ is improved, as shown in formulas (6) and (7):
$$E_1 = 1 - \frac{1}{1 + \mathrm{e}^{-\lambda}}, \tag{6}$$

$$\lambda = \left(t_{\max}\right)^{\frac{1}{4}} \left( \frac{2t}{t_{\max}} - 1 \right). \tag{7}$$
With the maximum number of iterations set to $t_{\max} = 1000$, the improved energy decline control parameter $E_1$ declines slowly in the early stage, drops rapidly during the middle of the hunt, and slows down again in the later stage.
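Formulas (6) and (7) give $E_1$ an S-shaped decay from near 1 to near 0. A minimal sketch, assuming $E_1$ slots into the standard HHO escape energy $E = 2 E_0 E_1$ in place of the linear term $1 - t/t_{\max}$ (the paper does not restate that product here):

```python
import math

def escape_energy(t, t_max, E0):
    """Improved escape energy E = 2 * E0 * E1, a sketch of formulas (6)-(7).

    E0 is the random initial energy in (-1, 1) from standard HHO; E1
    replaces the linear decrement with a sigmoid-shaped decay.
    """
    lam = (t_max ** 0.25) * (2.0 * t / t_max - 1.0)   # formula (7)
    E1 = 1.0 - 1.0 / (1.0 + math.exp(-lam))           # formula (6)
    return 2.0 * E0 * E1
```

At $t = 0$ the factor $E_1$ is close to 1 (full energy), at $t = t_{\max}/2$ it is exactly 0.5, and at $t = t_{\max}$ it is close to 0, with the steepest drop in the middle of the run.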

2.3 T-distribution strategy

Let $X \sim N(0, 1)$ and $Y \sim \chi^2(n)$, and let $T = \frac{X}{\sqrt{Y/n}}$; then $T$ is said to follow the $T$ distribution with $n$ degrees of freedom, denoted $T(n)$. When the iteration number $t$ is used as the degrees-of-freedom parameter, as $t$ grows from small to large, $T(t)$ transitions from the characteristics of the Cauchy distribution toward those of the Gaussian distribution, which improves the global optimization ability and convergence accuracy of the algorithm [11]. Adding the T-distribution strategy in the global exploration phase of the algorithm, the Harris hawk position update is shown in formula (8):
$$P(t+1) = P_{\text{best}}(t) + P_{\text{best}}(t) \times T(t). \tag{8}$$
where $P(t+1)$ is the position of the Harris hawk at iteration $t+1$, and $P_{\text{best}}(t)$ is the best individual position at iteration $t$.
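The update of formula (8) can be sketched with NumPy's t-distributed sampler; the function name and the use of a per-call generator are illustrative choices, not from the paper:

```python
import numpy as np

def t_distribution_update(p_best, t, rng=None):
    """Global-exploration update of formula (8), a sketch.

    A t-distributed random step with the current iteration t as the
    degrees of freedom perturbs the best position: early iterations
    behave Cauchy-like (wide jumps), later ones Gaussian-like.
    """
    rng = np.random.default_rng() if rng is None else rng
    df = max(int(t), 1)                        # degrees of freedom >= 1
    step = rng.standard_t(df, size=np.shape(p_best))
    return p_best + p_best * step              # formula (8)
```

The heavy tails at small $t$ give occasional long jumps for global search, and the shrinking tails at large $t$ concentrate the search near the current best.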

2.4 Adaptive dynamic disturbance mechanism

The inertia weight factor is an important parameter for balancing global exploration and local development: a larger value gives the algorithm stronger global exploration ability, while a smaller value gives it stronger local development ability [12]. When $|E| < 1$, the Harris Hawk optimization algorithm enters the local development stage. To avoid the algorithm falling into local optima and to improve its local optimization ability, an adaptive dynamic disturbance mechanism is proposed in combination with the inertia weight factor, as shown in formula (9):
$$\omega = -\cos\left( \frac{t}{2 t_{\max}} \pi \right) + \frac{\omega_{\text{initial}} + \omega_{t_{\max}}}{2}. \tag{9}$$
where $\omega_{\text{initial}} = 1$ is the initial disturbance weight and $\omega_{t_{\max}} = 0.5$ is the disturbance weight corresponding to the maximum number of iterations. After introducing the adaptive dynamic perturbation mechanism, the position update of the Harris hawk is shown in formula (10):
$$P_{\text{best}}'(t) = \omega \times P_{\text{best}}(t). \tag{10}$$
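Formulas (9) and (10) can be sketched directly; with $\omega_{\text{initial}} = 1$ and $\omega_{t_{\max}} = 0.5$ the constant term is $0.75$, so $\omega$ runs from $-0.25$ at $t = 0$ to $0.75$ at $t = t_{\max}$ exactly as written in (9):

```python
import math

def adaptive_weight(t, t_max, w_initial=1.0, w_final=0.5):
    """Adaptive dynamic disturbance weight of formula (9), a sketch."""
    return -math.cos(math.pi * t / (2.0 * t_max)) + (w_initial + w_final) / 2.0

def perturb_best(p_best, t, t_max):
    """Formula (10): scale the best position by the adaptive weight."""
    w = adaptive_weight(t, t_max)
    return [w * x for x in p_best]
```

The perturbed best position $P_{\text{best}}'(t)$ then replaces $P_{\text{best}}(t)$ in the development-stage position updates, nudging the flock off a stagnating optimum.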


3. Experimental results


4. References

[1] Luo Junxing. Adaptive Harris Hawk Optimization Algorithm Improved by Combining Multiple Strategies [J]. Journal of Zhangzhou Vocational and Technical College, 2023, 25(01): 84-90+102. DOI: 10.13908/j.cnki.issn1673-1417.2023.01.0013.

5. MATLAB code

6. Python code
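The full listing is not reproduced here, so what follows is a minimal Python sketch assembling the strategies of Section 2, not the author's implementation. Assumptions: the chaos seed 0.7 and the $+\tfrac{1}{2}$ arcsine sign of the initialization sketch above; the escape energy is $E = 2 E_0 E_1$ as in standard HHO; and the four-branch besiege stage of standard HHO is collapsed into a single hard-besiege move applied to the $\omega$-perturbed best position, with greedy selection.

```python
import numpy as np

def ltwhho(obj, dim, lb, ub, pop_size=30, t_max=500, seed=None):
    """Minimal LTWHHO sketch: Logistic init (4)-(5), nonlinear E1 (6)-(7),
    T-distribution exploration (8), adaptive perturbation (9)-(10).
    lb and ub are scalar bounds applied to every dimension."""
    rng = np.random.default_rng(seed)
    lb = np.full(dim, lb, dtype=float)
    ub = np.full(dim, ub, dtype=float)

    # Cascaded Logistic initialization (formulas (4)-(5)).
    x = 0.7                                    # assumed chaotic seed
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            xp = 4 * x * (1 - x)
            yp = np.arcsin(2 * xp - 1) / np.pi + 0.5   # +1/2 keeps [0, 1]
            x = 4 * yp * (1 - yp)
            y = np.arcsin(2 * x - 1) / np.pi + 0.5
            pop[i, j] = lb[j] + (ub[j] - lb[j]) * y

    fit = np.apply_along_axis(obj, 1, pop)
    best = pop[fit.argmin()].copy()
    best_fit = float(fit.min())

    for t in range(1, t_max + 1):
        lam = t_max ** 0.25 * (2 * t / t_max - 1)      # formula (7)
        E1 = 1 - 1 / (1 + np.exp(-lam))                # formula (6)
        for i in range(pop_size):
            E = 2 * rng.uniform(-1, 1) * E1            # escape energy
            if abs(E) >= 1:                            # global exploration
                cand = best + best * rng.standard_t(t, size=dim)   # (8)
            else:                                      # local development
                w = -np.cos(np.pi * t / (2 * t_max)) + 0.75        # (9)
                # Hard besiege around the perturbed best, formula (10).
                cand = w * best - E * np.abs(w * best - pop[i])
            cand = np.clip(cand, lb, ub)
            f = float(obj(cand))
            if f < fit[i]:                             # greedy selection
                pop[i], fit[i] = cand, f
                if f < best_fit:
                    best, best_fit = cand.copy(), f
    return best, best_fit
```

Usage on the sphere function, for example: `ltwhho(lambda v: float(np.sum(v * v)), dim=5, lb=-10, ub=10)` returns the best position found and its fitness.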


Origin blog.csdn.net/u011835903/article/details/131444013