Improved Beluga Whale Optimization Algorithm (original author)

1. Algorithm inspiration

  Beluga whale optimization (BWO) is a meta-heuristic optimization algorithm proposed in 2022 that is inspired by the behaviors of beluga whales. Known for their pure white color as adults, beluga whales are highly social animals that gather in groups of 2 to 25 members, with an average of 10. Like other metaheuristic methods, BWO includes an exploration phase and an exploitation phase; in addition, the algorithm simulates the whale fall phenomenon observed in nature.

2. Algorithm introduction

2.1 Initialization

  The exploration phase ensures global search capability in the design space by selecting belugas at random, while the exploitation phase controls the local search in the design space. To simulate these behaviors, each beluga whale is treated as a search agent that moves through the search space by changing its position vector. In addition, BWO accounts for the probability of a whale fall, which also changes a beluga's position.
  Because of BWO's population-based mechanism, beluga whales are treated as search agents, and each beluga whale is a candidate solution that is updated during the optimization process. The matrix of search-agent positions is modeled as:
$$X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,d} \end{bmatrix} \tag{1}$$
where $n$ is the beluga population size and $d$ is the dimension of the design variables. For all belugas, the corresponding fitness values are stored as follows:
$$F_X = \begin{bmatrix} f(x_{1,1}, x_{1,2}, \ldots, x_{1,d}) \\ f(x_{2,1}, x_{2,2}, \ldots, x_{2,d}) \\ \vdots \\ f(x_{n,1}, x_{n,2}, \ldots, x_{n,d}) \end{bmatrix} \tag{2}$$
  The BWO algorithm shifts from exploration to exploitation depending on the balance factor $B_f$, whose mathematical model is:
$$B_f = B_0 \left(1 - \frac{t}{2T}\right) \tag{3}$$
where $t$ is the current iteration, $T$ is the maximum number of iterations, and $B_0$ varies randomly in $(0,1)$ at each iteration. The exploration phase occurs when $B_f > 0.5$, while the exploitation phase occurs when $B_f \le 0.5$. As the iteration $t$ increases, the fluctuation range of $B_f$ shrinks from $(0,1)$ to $(0,0.5)$, which means the probabilities of the two phases change significantly: the probability of the exploitation phase increases as the iterations progress.
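  For concreteness, Equations (1)-(3) can be sketched in Python as below. This is a minimal sketch, not the authors' code: the function names, the NumPy implementation, and the sphere objective used in the example are assumptions of mine, and $B_0$ is redrawn for each beluga at every iteration, one common reading of Equation (3).

```python
import numpy as np

def initialize_population(n, d, lb, ub, rng):
    """Eq. (1): n belugas, each a d-dimensional candidate solution in [lb, ub]."""
    return lb + (ub - lb) * rng.random((n, d))

def evaluate(X, f):
    """Eq. (2): fitness value of every beluga."""
    return np.array([f(x) for x in X])

def balance_factor(n, t, T, rng):
    """Eq. (3): B_f = B0 * (1 - t / (2T)); B0 is redrawn in (0, 1) for each beluga."""
    return rng.random(n) * (1.0 - t / (2.0 * T))

# Example with the sphere function standing in for the objective
rng = np.random.default_rng(0)
X = initialize_population(n=10, d=5, lb=-100.0, ub=100.0, rng=rng)
F = evaluate(X, lambda x: float(np.sum(x ** 2)))
Bf = balance_factor(n=10, t=1, T=100, rng=rng)
```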

2.2 Exploration stage

  The exploration phase of BWO is established by considering the swimming behavior of beluga whales. Belugas engage in social behavior in various postures, for example a pair of belugas swimming close together in a synchronized or mirrored manner. Accordingly, the position of a search agent is determined by the pair-swimming of belugas, and positions are updated as follows:
$$X_{i,j}^{t+1} =
\begin{cases}
X_{i,p_j}^{t} + \left(X_{r,p_1}^{t} - X_{i,p_j}^{t}\right)(1 + r_1)\sin(2\pi r_2), & j \text{ is even} \\
X_{i,p_j}^{t} + \left(X_{r,p_1}^{t} - X_{i,p_j}^{t}\right)(1 + r_1)\cos(2\pi r_2), & j \text{ is odd}
\end{cases} \tag{4}$$
where $t$ is the current iteration, $X_{i,j}^{t+1}$ is the new position of the $i$-th beluga in the $j$-th dimension, $p_j\,(j = 1, 2, \ldots, d)$ is a dimension randomly selected from the $d$ dimensions, $X_{i,j}^{t}$ is the position of the $i$-th beluga in the $j$-th dimension, $X_{i,p_j}^{t}$ and $X_{r,p_1}^{t}$ are the current positions of the $i$-th and $r$-th belugas ($r$ is a randomly selected beluga), and $r_1$ and $r_2$ are random numbers in $(0,1)$ that strengthen the random operators in the exploration phase. The terms $\sin(2\pi r_2)$ and $\cos(2\pi r_2)$ indicate that the fins of the mirrored belugas face the water surface. Depending on whether the selected dimension is even or odd, the updated position reflects the belugas' synchronized or mirrored behavior while swimming or diving.
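  As an illustration, the exploration update could be sketched as below. This is my own reading of Equation (4): the dimensions $p_j$ and $p_1$ are drawn afresh for every dimension $j$, and the helper name is an assumption, not the authors' implementation.

```python
import numpy as np

def exploration_update(X, i, r, rng):
    """Eq. (4): pair-swimming exploration update for beluga i against a random beluga r."""
    n, d = X.shape
    X_new = X[i].copy()
    for j in range(d):
        pj = rng.integers(d)            # randomly selected dimension p_j
        p1 = rng.integers(d)            # randomly selected dimension p_1
        r1, r2 = rng.random(), rng.random()
        if j % 2 == 0:                  # even-indexed dimension: synchronized swimming
            X_new[j] = X[i, pj] + (X[r, p1] - X[i, pj]) * (1 + r1) * np.sin(2 * np.pi * r2)
        else:                           # odd-indexed dimension: mirrored swimming
            X_new[j] = X[i, pj] + (X[r, p1] - X[i, pj]) * (1 + r1) * np.cos(2 * np.pi * r2)
    return X_new
```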

2.3 Exploitation stage

  The exploitation phase of BWO is inspired by the preying behavior of beluga whales. Belugas can forage and move cooperatively based on the positions of nearby belugas, so they prey by sharing position information with one another, considering both the best candidate solution and the other solutions. The Levy flight strategy is introduced into the exploitation phase of BWO to improve convergence; we assume belugas capture prey using Levy flights, and the mathematical model is expressed as:
$$X_i^{t+1} = r_3 X_{best}^{t} - r_4 X_i^{t} + C_1 \cdot L_F \cdot \left(X_r^{t} - X_i^{t}\right) \tag{5}$$
where $t$ is the current iteration, $X_i^{t}$ and $X_r^{t}$ are the current positions of the $i$-th beluga and a random beluga, $X_i^{t+1}$ is the new position of the $i$-th beluga, $X_{best}^{t}$ is the best position among the belugas, $r_3$ and $r_4$ are random numbers in $(0,1)$, and $C_1 = 2 r_4 (1 - t/T_{max})$ measures the strength of the random jump of the Levy flight.
  $L_F$ is the Levy flight function, calculated as follows:
$$L_F = 0.05 \times \frac{u \times \sigma}{|v|^{1/\beta}} \tag{6}$$
$$\sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi\beta/2)}{\Gamma((1+\beta)/2) \times \beta \times 2^{(\beta-1)/2}} \right)^{1/\beta} \tag{7}$$
where $u$ and $v$ are normally distributed random numbers and $\beta$ is a default constant equal to $1.5$.
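  A minimal Python sketch of Equations (5)-(7), assuming $u$ and $v$ are standard normal variates and using `math.gamma` for $\Gamma(\cdot)$; the function names are mine.

```python
import numpy as np
from math import gamma, pi, sin

def levy_flight(d, beta=1.5, rng=None):
    """Eqs. (6)-(7): Levy flight vector L_F of length d."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.standard_normal(d)          # u ~ N(0, 1)
    v = rng.standard_normal(d)          # v ~ N(0, 1)
    return 0.05 * u * sigma / np.abs(v) ** (1 / beta)

def exploitation_update(X, i, r, x_best, t, T_max, rng):
    """Eq. (5): move beluga i toward the best beluga with a Levy-flight jump."""
    d = X.shape[1]
    r3, r4 = rng.random(), rng.random()
    C1 = 2 * r4 * (1 - t / T_max)       # strength of the random Levy jump
    LF = levy_flight(d, rng=rng)
    return r3 * x_best - r4 * X[i] + C1 * LF * (X[r] - X[i])
```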

2.4 Whale fall stage

  During their migration and foraging, beluga whales are threatened by killer whales, polar bears, and humans. Most belugas are clever enough to evade these threats by sharing information with each other, but a few do not survive and sink to the bottom of the sea. This phenomenon, known as a "whale fall", feeds an enormous number of creatures: sharks and invertebrates gather to feed on the carcass, and the exposed bones and remains attract large numbers of crustaceans. Eventually, the skeleton is broken down or colonized by bacteria and corals over decades.
  To simulate whale fall behavior in each iteration, we choose the whale fall probability of individuals in the population as a subjective assumption to model this small change in the group. We assume that these belugas have either moved elsewhere or have been shot and fallen into the deep sea. To keep the population size constant, the positions of the belugas and the whale fall step length are used to establish the updated position. The mathematical model is expressed as:
$$X_i^{t+1} = r_5 X_i^{t} - r_6 X_r^{t} + r_7 X_{step} \tag{8}$$
where $r_5$, $r_6$, and $r_7$ are random numbers in $(0,1)$ and $X_{step}$ is the whale fall step length, determined as:
$$X_{step} = (u_b - l_b)\exp\!\left(-\frac{C_2 t}{T}\right) \tag{9}$$
where $C_2$ is a step factor related to the whale fall probability and the population size ($C_2 = 2 W_f \times n$), and $u_b$ and $l_b$ are the upper and lower bounds of the variables, respectively. It can be seen that the step length is affected by the bounds of the design variables, the current iteration, and the maximum number of iterations.
  In this model, the whale fall probability ($W_f$) is calculated as a linear function:
$$W_f = 0.1 - 0.05\,t/T \tag{10}$$
  The whale fall probability decreases from $0.1$ in the first iteration to $0.05$ in the last, indicating that the danger to the belugas decreases as they move closer to the food source during the optimization process.
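  Equations (8)-(10) translate into a short sketch; the function names are my own, and the final clipping to the bounds mirrors the boundary check in the pseudocode of Section 3.4.

```python
import numpy as np

def whale_fall_probability(t, T):
    """Eq. (10): W_f decreases linearly from 0.1 to 0.05 over the run."""
    return 0.1 - 0.05 * t / T

def whale_fall_update(X, i, r, lb, ub, t, T, rng):
    """Eqs. (8)-(9): relocate beluga i using the whale fall step X_step."""
    n = X.shape[0]
    Wf = whale_fall_probability(t, T)
    C2 = 2 * Wf * n                                 # step factor, C2 = 2 * Wf * n
    X_step = (ub - lb) * np.exp(-C2 * t / T)        # Eq. (9)
    r5, r6, r7 = rng.random(3)
    X_new = r5 * X[i] - r6 * X[r] + r7 * X_step     # Eq. (8)
    return np.clip(X_new, lb, ub)                   # keep the new position within bounds
```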

3. Improved Beluga Optimization Algorithm

3.1 Collective action strategies

  Belugas are highly social animals; behaviors such as hunting and migration are carried out in groups. Beluga whales produce a wide variety of sounds that can even be heard above the surface. They use many different clicks, chirps, and whistles (ranging from 3-9 kHz) to communicate with one another, as well as a very distinctive "bell tone" unique to this species. They use these sounds to share their locations while avoiding their natural predator, the killer whale. Because belugas eat a great deal, they cannot hunt too close to their companions, or they will not get enough food; but they also cannot stray too far apart, or they will be threatened by killer whales. This article therefore chooses $W_K$ as the critical value that lets a beluga meet its own food needs while minimizing the threat from killer whales. As the number of iterations increases, the search-agent belugas gradually grow, so $W_K$ also increases with the iterations. The expression for $W_K$ is given in Equation (11), where $W_t$ is the minimum distance between belugas, set to $0.35$ in this article. We also assume that belugas can capture prey using the Levy flight strategy. The mathematical model is expressed as:
$$W_K = \frac{t}{5T} + W_t \tag{11}$$
$$X_i^{t+1} = X_i^{t} + L_F \cdot X_{best}^{t} - W_K \cdot \left(X_r^{t} + X_i^{t}\right) \tag{12}$$
where $t$ is the current iteration, $X_i^{t}$ and $X_r^{t}$ are the current positions of the $i$-th beluga and a random beluga, $X_i^{t+1}$ is the new position of the $i$-th beluga, and $X_{best}^{t}$ is the best position among the belugas.
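  A hedged sketch of Equations (11)-(12): it reuses the `levy_flight` helper sketched in Section 2.3, takes $W_t = 0.35$ as stated above, and leaves the choice of the random beluga $r$ to the caller; the function name is mine.

```python
def collective_action_update(X, i, r, x_best, t, T, Wt=0.35, rng=None):
    """Eqs. (11)-(12): collective-action move of beluga i toward the best beluga.

    levy_flight is the L_F sketch from the exploitation section (Eqs. (6)-(7)).
    """
    d = X.shape[1]
    WK = t / (5 * T) + Wt                            # Eq. (11): critical distance grows with t
    LF = levy_flight(d, rng=rng)                     # Levy flight step
    return X[i] + LF * x_best - WK * (X[r] + X[i])   # Eq. (12)
```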

Figure 1. Before strategy execution


Figure 2. After strategy execution

3.2 Dynamic pinhole imaging strategy

  If each individual in the population generates an opposite individual at the position opposite to its current one, one of the two individuals is more likely to be close to the optimal solution, so good results can be obtained with only a few extra operations. In 2005, Tizhoosh proposed the opposition-based learning strategy, whose principle is to generate an opposite solution from a known solution, compare the two, and keep the better one. The dynamic pinhole imaging strategy is a general strategy proposed by Li in 2022 that imitates the pinhole imaging theory of optics. Compared with ordinary opposition-based learning, pinhole imaging is more accurate and can produce more diverse opposite points. Figure 3 shows a typical theoretical model of pinhole imaging. Applying it to the whole search space yields the following mathematical model:
$$\frac{Xbest_{i,j}^{t} - (Ub_{i,j} + Lb_{i,j})/2}{(Ub_{i,j} + Lb_{i,j})/2 - X_{i,j}^{t+1}} = \frac{L_p}{L_{-p}} \tag{13}$$
where $Xbest_{i,j}^{t}$ is the current position of the best individual, $X_{i,j}^{t+1}$ is the opposite position given by pinhole imaging theory, $Ub_{i,j}$ and $Lb_{i,j}$ are the dynamic boundaries of the $i$-th beluga in the $j$-th dimension, and $L_p$ and $L_{-p}$ are the lengths of the virtual candle at the current best position and at the opposite position, respectively. It is worth noting that the position of the virtual candle in the ocean is simply the position of the search agent, and the points representing individual whales have no actual length, so the ratio of the two candles can be set to a variable $K$. From this we obtain:
$$X_{i,j}^{t+1} = \frac{(K + 1)(Ub_{i,j} + Lb_{i,j}) - 2\,Xbest_{i,j}^{t}}{2K} \tag{14}$$
  When the lengths of the two virtual candles are the same, this strategy degenerates into the basic opposition-based learning strategy. Appropriately adjusting the value of $K$ changes the position of the opposite point, giving a single beluga more search opportunities. In this article, $K$ is set to $1.5 \times 10^{4}$. In this way, a new opposite point is generated near the center line of the search space at each iteration, and when the opposite position is better, it becomes a new boundary.
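  Equation (14), together with the greedy comparison described above, can be sketched as follows. The function names are mine, `lb_dyn`/`ub_dyn` stand for the dynamic boundaries $Lb_{i,j}$/$Ub_{i,j}$, and minimization is assumed.

```python
def pinhole_opposite_point(x_best, lb_dyn, ub_dyn, K=1.5e4):
    """Eq. (14): opposite point of the current best position via pinhole imaging.

    With K = 1 this reduces to ordinary opposition-based learning.
    """
    return ((K + 1) * (ub_dyn + lb_dyn) - 2 * x_best) / (2 * K)

def apply_pinhole_strategy(x_best, f_best, lb_dyn, ub_dyn, f, K=1.5e4):
    """Greedy selection: keep the opposite point only if its fitness is better."""
    x_opp = pinhole_opposite_point(x_best, lb_dyn, ub_dyn, K)
    f_opp = f(x_opp)
    return (x_opp, f_opp) if f_opp < f_best else (x_best, f_best)
```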

Figure 3. Dynamic pinhole imaging strategy

3.3 Quadratic interpolation strategy

  The quadratic interpolation method is a curve-fitting method used to search for an extreme point within a given initial interval. It uses the objective function's values at several points to construct a low-order polynomial that approximates the objective function, and then takes the optimal solution of this polynomial as an approximate optimal solution of the function. As the interval gradually shortens, the distance between the polynomial's optimal point and the original function's optimal point gradually decreases until it finally meets the required accuracy. The principle is shown in Figure 4.
  The objective function's values at three different points are used to construct a quadratic polynomial $p(x)$ that approximates the original function $f(x)$, and the extreme point of $p(x)$ (that is, the root of $p'(x) = 0$) is taken as an approximation of the extreme point of $f(x)$. Assume the unimodal interval of the objective function is $[x_1, x_3]$, and that the function values at the three points $x_1$, $x_2$, and $x_3$ are $f_1$, $f_2$, and $f_3$, respectively.
$$\begin{cases}
p(x_1) = a_0 + a_1 x_1 + a_2 x_1^2 = f_1 \\
p(x_2) = a_0 + a_1 x_2 + a_2 x_2^2 = f_2 \\
p(x_3) = a_0 + a_1 x_3 + a_2 x_3^2 = f_3
\end{cases} \tag{15}$$
  The extreme point of $p(x)$ satisfies:
$$p'(x) = a_1 + 2 a_2 x = 0 \tag{16}$$
Solving for the coefficients $a_0$, $a_1$, and $a_2$ and substituting them into Equations (15) and (16) gives:
$$x_p^* = \frac{1}{2} \cdot \frac{(x_2^2 - x_3^2) f_1 + (x_3^2 - x_1^2) f_2 + (x_1^2 - x_2^2) f_3}{(x_2 - x_3) f_1 + (x_3 - x_1) f_2 + (x_1 - x_2) f_3} \tag{17}$$
  Substituting the result of Equation (17) into the objective function $f(x)$ gives the function value $f_p$. If $f(x)$ is itself a quadratic function, the optimal point can be found by a single evaluation of Equation (17). If $f(x)$ is of higher than second order, or some other function, the interval must be narrowed step by step. As shown in Figure 4, the black solid line is the graph of $f(x)$ and the blue dotted line is the graph of $p(x)$.
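  As a quick check (the numbers are my own illustration), take $f(x) = (x - 2)^2$ with $x_1 = 0$, $x_2 = 1$, $x_3 = 3$, so $f_1 = 4$, $f_2 = 1$, $f_3 = 1$. Equation (17) gives $x_p^* = \frac{1}{2} \cdot \frac{(1 - 9)\cdot 4 + (9 - 0)\cdot 1 + (0 - 1)\cdot 1}{(1 - 3)\cdot 4 + (3 - 0)\cdot 1 + (0 - 1)\cdot 1} = \frac{1}{2} \cdot \frac{-24}{-6} = 2$, which is exactly the minimizer, as expected since $f$ is itself quadratic.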
Insert image description here

(a) First iteration (b) Second iteration
Figure 4. Principle of quadratic interpolation

  Every time the beluga positions are updated, this article selects the current best individual $Xbest_{i,j}^{t}$ and two other random individuals $X_{rl}^{t}$ and $X_{rr}^{t}$ to construct a low-order polynomial that approximates the objective function, and uses it to compute the solution $X_{i,j}^{t+1}$. Substituting these variables into Equation (17) yields Equation (18):
$$X_{i,j}^{t+1} = \frac{1}{2} \times \frac{\left[(X_{rl,j}^{t})^2 - (Xbest_{i,j}^{t})^2\right] f(X_{rr}^{t}) + \left[(Xbest_{i,j}^{t})^2 - (X_{rr,j}^{t})^2\right] f(X_{rl}^{t}) + \left[(X_{rr,j}^{t})^2 - (X_{rl,j}^{t})^2\right] f(Xbest_{i,j}^{t})}{(X_{rl,j}^{t} - Xbest_{i,j}^{t}) f(X_{rr}^{t}) + (Xbest_{i,j}^{t} - X_{rr,j}^{t}) f(X_{rl}^{t}) + (X_{rr,j}^{t} - X_{rl,j}^{t}) f(Xbest_{i,j}^{t}) + eps} \tag{18}$$
where $f(Xbest_{i,j}^{t})$, $f(X_{rl}^{t})$, and $f(X_{rr}^{t})$ are the fitness values of $Xbest_{i,j}^{t}$, $X_{rl}^{t}$, and $X_{rr}^{t}$, respectively, and $eps$ is a small constant that keeps the denominator from being zero. This method greatly improves the local search capability of the beluga search algorithm, and thereby improves its ability to escape local optima, its search accuracy, and its convergence performance in high-dimensional space.
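  Equation (18) reduces to a short function; treating the fitness values as scalars and the positions as per-dimension vectors is my own reading of the formula, and `eps` plays the same role as the eps term in the denominator.

```python
def quadratic_interpolation_update(x_best, x_rl, x_rr, f_best, f_rl, f_rr, eps=1e-12):
    """Eq. (18): per-dimension quadratic-interpolation candidate built from the
    best beluga and two random belugas (x_rl, x_rr)."""
    num = ((x_rl ** 2 - x_best ** 2) * f_rr +
           (x_best ** 2 - x_rr ** 2) * f_rl +
           (x_rr ** 2 - x_rl ** 2) * f_best)
    den = ((x_rl - x_best) * f_rr +
           (x_best - x_rr) * f_rl +
           (x_rr - x_rl) * f_best + eps)      # eps guards against a zero denominator
    return 0.5 * num / den
```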

3.4 IBWO pseudocode

  An improved beluga whale optimization algorithm (IBWO) is proposed by improving the exploration phase of BWO and combining it with two general strategies. Through the above improvement strategies, the exploration capability and convergence speed of IBWO are enhanced, making the search more global.

  1. Initialize parameters T, Tmax, ub, lb, N, dim, WK.
  2. Initialize the population X according to Equation (1).
  3. Calculate the fitness values of all individuals and select the optimal solution G.
  4. While T≤Tmax
  5.   Obtain balance factor Bf by Eq. (3) and probability of whale fall Wf by Eq. (10).
  6.   For i=1:N
  7.     If Bf (i) > 0.5
  8.       Perform the beluga exploration phase according to Formula (4).
  9.     Else If Bf (i) ≤ 0.5
  10.       Perform the beluga exploitation phase according to Formulas (5)-(7).
  11.     End If
  12.   End For
  13.   For i=1:N
  14.     If Bf (i) ≤ Wf
  15.       Perform the whale fall step according to Formulas (8)-(10).
  16.       Check whether the new position is within the bounds and calculate the fitness value at that position.
  17.     End If
  18.   End For
  19.   For i=1:N
  20.     Update the position with the dynamic pinhole imaging strategy, Formula (14).
  21.   End For
  22.   For i=1:N
  23.     Update the position with the quadratic interpolation strategy, Formula (18).
  24.   End For
  25.   Find the current best solution G
  26.   T = T+1
  27. End While
  28. Output the best solution
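  For completeness, one possible arrangement of the whole loop in Python, following the pseudocode above and reusing the helper sketches from the earlier sections. The greedy acceptance of improved positions and the once-per-iteration pinhole step applied to the best position are my own simplifications rather than the authors' exact choices.

```python
import numpy as np

def ibwo(f, n, d, lb, ub, T_max, seed=0):
    """Sketch of the IBWO main loop; helpers are the ones sketched in earlier sections."""
    rng = np.random.default_rng(seed)
    X = initialize_population(n, d, lb, ub, rng)
    F = evaluate(X, f)
    g = int(np.argmin(F))
    x_best, f_best = X[g].copy(), F[g]

    for t in range(1, T_max + 1):
        Bf = balance_factor(n, t, T_max, rng)              # Eq. (3)
        Wf = whale_fall_probability(t, T_max)              # Eq. (10)

        for i in range(n):                                 # exploration / exploitation
            r = int(rng.integers(n))
            if Bf[i] > 0.5:
                x_new = exploration_update(X, i, r, rng)                     # Eq. (4)
            else:
                x_new = exploitation_update(X, i, r, x_best, t, T_max, rng)  # Eqs. (5)-(7)
            x_new = np.clip(x_new, lb, ub)
            f_new = f(x_new)
            if f_new < F[i]:                               # greedy acceptance (my choice)
                X[i], F[i] = x_new, f_new

        for i in range(n):                                 # whale fall, Eqs. (8)-(10)
            if Bf[i] <= Wf:
                r = int(rng.integers(n))
                X[i] = whale_fall_update(X, i, r, lb, ub, t, T_max, rng)
                F[i] = f(X[i])

        g = int(np.argmin(F))
        if F[g] < f_best:
            x_best, f_best = X[g].copy(), F[g]

        # pinhole imaging refinement of the best position, Eq. (14)
        x_best, f_best = apply_pinhole_strategy(x_best, f_best, lb, ub, f)

        for i in range(n):                                 # quadratic interpolation, Eq. (18)
            rl, rr = int(rng.integers(n)), int(rng.integers(n))
            x_q = np.clip(quadratic_interpolation_update(x_best, X[rl], X[rr],
                                                         f_best, F[rl], F[rr]), lb, ub)
            f_q = f(x_q)
            if f_q < F[i]:
                X[i], F[i] = x_q, f_q

        g = int(np.argmin(F))
        if F[g] < f_best:
            x_best, f_best = X[g].copy(), F[g]

    return x_best, f_best

# Example run on the sphere function
if __name__ == "__main__":
    x, fx = ibwo(lambda v: float(np.sum(v ** 2)), n=30, d=10,
                 lb=-100.0, ub=100.0, T_max=200)
    print(fx)
```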

4. References

[1] H. Chen, Z. Wang, D. Wu, et al., "An improved multi-strategy beluga whale optimization for global optimization problems," Mathematical Biosciences and Engineering, vol. 20, pp. 13267-13317, 2023.

Origin blog.csdn.net/jiaheming1983/article/details/131490049