Intelligent optimization algorithm - Harris Eagle algorithm (Matlab code implementation)

content

1 written in front

2 Harris Eagle Optimization Mathematical Model

2.1 Position update formula

2.2 Linear decreasing formula for prey energy reduction

2.3 Soft surround

2.4 Hard surround

2.5 Adopt a soft encirclement strategy of progressive rapid dive

2.6 Adopt a hard encirclement strategy of progressive rapid dive

 3 matlab code implementation 

3.1 Code

3.2 Results 


1 written in front

Be funny first:

In nature, the Harris Hawk uses its sharp eyes to scout the environment and track prey. But in the vast stretches of southern Arizona, sometimes life isn't easy. In desert areas, it often takes hours to wait, watch, and track prey.

Harris Hawks Optimizer (HHO) is a meta-heuristic algorithm proposed by Heidari, Mirjalili and others in 2019. I have to say that Professor Mirjalili is really amazing! The Harris eagle, which lives primarily in southern Arizona, is unique because it engages in unique cooperative foraging activities with other family members in the group, while other species of raptors typically hunt their prey alone. . Because of this, the unique swarm predation behavior of the Harris eagle is well suited to be modeled as a swarm intelligence optimization process.

To discuss how an algorithm is, we must first look at its advantages: the algorithm has a strong global search ability and has the advantages of fewer parameters that need to be adjusted.

 


2 Harris Eagle Optimization Mathematical Model

2.1 Position update formula

Harris Hawk Optimization (HHO) is a population-based optimization idea proposed in the past two years. In the exploration phase of HHO, in order to simulate the state of Harris eagle scouting prey, the individual position of the eagle group is randomly updated as follows:


 

Among them , q , r 1 , r 2 , r 3 , and r 4 are all random numbers in [0,1], ub and lb are the upper and lower limits of the search space; Xrand  is a random individual position; Xrabbit  is the prey position, and Xave is The average position of all individuals within a population. r3 is a scaling factor that further increases the randomness of the strategy once the value of r4 approaches 1. Similar to the whale optimization algorithm , the content in the absolute value can be regarded as the relative distance between two bodies; r 1 is a random scale coefficient, which provides a diversification trend for Harris eagle's habitat and enables it to explore the feature space of different regions . 


2.2 Linear decreasing formula for prey energy reduction

Similar to other swarm intelligence algorithms, the transition between the exploration and development phases of HHO is controlled by a linear decreasing equation that simulates the reduction of prey energy. The Harris Hawk can switch between different states according to the escape energy of the prey (Little Rabbit). During the escape of the prey, its escape energy will be greatly reduced:

                                      

In the formula, E is the initial energy of the prey, which is updated in the range of (-1, 1) at each iteration; Maxiter represents the maximum iteration algebra.

In view of the differences in escape energy among different prey, the original text makes E0 (the initial value of escape energy) to vary randomly within [-1, 1] during the algorithm iteration process. Regarding the initial value of this escape energy, the original text gives the following explanation:

When the value of E0 decreases from 0 to -1, the rabbit is physically flagging, whilst when the value of E0 increases from 0 to 1, it means that the rabbit is strengthening. The dynamic escaping energy E has a decreasing trend during the iterations.

When E0 decreases from 0 to -1 (E0<0), is the rabbit "more and more tired"? In fact, it can be understood that Tutu's constant escape consumes a lot of energy, so it becomes more and more empty; when E0 increases from 0 to 1 (E0>0), Tutu is in the energy recovery stage. When | |≥1, Harris Hawk searches different areas to further explore the position of the prey, which corresponds to the global search stage; when | |<1, Harris Hawk performs local exploration for adjacent solutions, so Corresponds to the local development stage.


On the other hand, the development stage in HHO can be divided into the following 4 strategies according to the different chase modes of Harris Hawk:

Before talking about the four strategies, we must first mention the premise of the attack. Suppose r  is the rabbit escape probability, r  < 0.5 is successful escape, and r  ≥ 0.5 is escape failure. Typically, Harris hawks capture their prey in a tough or gentle siege (one soft and one firm, have to admire the author's imagination). This siege style means that the Harris Hawk will attack the prey softly or hard from different directions depending on the remaining energy of the prey. In real-world situations, Harris hawks get closer and closer to the intended prey and increase the chances of cooperative killing by raiding. Over time, the prey will lose more and more energy, at which point the Harris eagle intensifies the siege process to capture the prey. In this process, the role of escape energy is self-evident. The original text assumes that when | E  |≥0.5, a gentle siege is performed; when | |<0.5, a tough siege is performed.

2.3  Soft surround

It can be expressed by the following formula :

                  

Among them , J  is the jumping distance of the rabbit during the escape process, and J =2*(1-rand).

If the enemy is strong, I will retreat, and if the enemy is tired, I will advance. Lao Mao's big strategy has been used, praise!

Tutu has too much energy, so let’s start with soft drops:

               

 

2.4  Hard surround

The corresponding mathematical expression:
When the rabbit is exhausted, it looks like this:

 

 

2.5  Adopt a soft encirclement strategy of progressive rapid dive

LF( ) is the mathematical expression of Levi's flight. Based on the known dimension D of the group, the update strategy can be expressed as:


2.6  Adopt a hard encirclement strategy of progressive rapid dive

Similar to the previous strategy, the corresponding update strategy can be expressed as:

 3 matlab code implementation 

3.1 Code

%%===欢迎关注公众号:电力系统与算法之美===
clear
close all
clc
 
SearchAgents_no = 30 ; % 种群规模
dim = 10 ; % 粒子维度
Max_iter = 1000 ; % 迭代次数
ub = 5 ;
lb = -5 ;
%% 初始化猎物位置和逃逸能量
Rabbit_Location=zeros(1,dim);
Rabbit_Energy=inf;
 
%% 初始化种群的位置
Positions= lb + rand(SearchAgents_no,dim).*(ub-lb) ;
 
Convergence_curve = zeros(Max_iter,1);
 
%% 开始循环
for t=1:Max_iter
    for i=1:size(Positions,1)
        FU=Positions(i,:)>ub;FL=Positions(i,:)<lb;Positions(i,:)=(Positions(i,:).*(~(FU+FL)))+ub.*FU+lb.*FL;
        fitness=sum(Positions(i,:).^2);
        if fitness<Rabbit_Energy
            Rabbit_Energy=fitness;
            Rabbit_Location=Positions(i,:);
        end
    end
    
    E1=2*(1-(t/Max_iter));
    %% 鹰群的个体位置位置更新
    for i=1:size(Positions,1)
        E0=2*rand()-1; %-1<E0<1
        Escaping_Energy=E1*(E0); 
        
        if abs(Escaping_Energy)>=1
            %%

            q=rand();
            rand_Hawk_index = floor(SearchAgents_no*rand()+1);
            X_rand = Positions(rand_Hawk_index, :);
            if q<0.5
                %%
                Positions(i,:)=X_rand-rand()*abs(X_rand-2*rand()*Positions(i,:));
            elseif q>=0.5
                
                Positions(i,:)=(Rabbit_Location(1,:)-mean(Positions))-rand()*((ub-lb)*rand+lb);
            end
            
        elseif abs(Escaping_Energy)<1
            %% 
           
            %% phase 1
            
            
            r=rand(); 
            
            if r>=0.5 && abs(Escaping_Energy)<0.5 
                Positions(i,:)=(Rabbit_Location)-Escaping_Energy*abs(Rabbit_Location-Positions(i,:));
            end
            
            if r>=0.5 && abs(Escaping_Energy)>=0.5  
                Jump_strength=2*(1-rand()); 
                Positions(i,:)=(Rabbit_Location-Positions(i,:))-Escaping_Energy*abs(Jump_strength*Rabbit_Location-Positions(i,:));
            end
            
            %% phase 2
            if r<0.5 && abs(Escaping_Energy)>=0.5
                Jump_strength=2*(1-rand());
                X1=Rabbit_Location-Escaping_Energy*abs(Jump_strength*Rabbit_Location-Positions(i,:));
                
                if sum(X1.^2)<sum(Positions(i,:).^2)
                    Positions(i,:)=X1;
                else 
                    beta=1.5;
                    sigma=(gamma(1+beta)*sin(pi*beta/2)/(gamma((1+beta)/2)*beta*2^((beta-1)/2)))^(1/beta);
                    u=randn(1,dim)*sigma;v=randn(1,dim);step=u./abs(v).^(1/beta);
                    o1=0.01*step;
                    X2=Rabbit_Location-Escaping_Energy*abs(Jump_strength*Rabbit_Location-Positions(i,:))+rand(1,dim).*o1;
                    if (sum(X2.^2)<sum(Positions(i,:).^2))% improved move
                        Positions(i,:)=X2;
                    end
                end
            end
            
            if r<0.5 && abs(Escaping_Energy)<0.5
                Jump_strength=2*(1-rand());
                X1=Rabbit_Location-Escaping_Energy*abs(Jump_strength*Rabbit_Location-mean(Positions));
                
                if sum(X1.^2)<sum(Positions(i,:).^2) 
                    Positions(i,:)=X1;
                else 
                   
                    beta=1.5;
                    sigma=(gamma(1+beta)*sin(pi*beta/2)/(gamma((1+beta)/2)*beta*2^((beta-1)/2)))^(1/beta);
                    u=randn(1,dim)*sigma;v=randn(1,dim);step=u./abs(v).^(1/beta);
                    o2=0.01*step;
                    X2=Rabbit_Location-Escaping_Energy*abs(Jump_strength*Rabbit_Location-mean(Positions))+rand(1,dim).*o2;
                    if (sum(X2.^2)<sum(Positions(i,:).^2))% improved move
                        Positions(i,:)=X2;
                    end
                end
            end
            %%
        end
    end
    
    Convergence_curve(t)=Rabbit_Energy;
    
    if mod(t,50)==0
        display(['At iteration ', num2str(t), ' the best fitness is ', num2str(Rabbit_Energy)]);
    end
end
figure('unit','normalize','Position',[0.3,0.35,0.4,0.35],'color',[1 1 1],'toolbar','none')
subplot(1,2,1);
x = -5:0.1:5;y=x;
L=length(x);
f=zeros(L,L);
for i=1:L
    for j=1:L
       f(i,j) = x(i)^8+y(j)^8;
    end
end
surfc(x,y,f,'LineStyle','none');
xlabel('x_1');
ylabel('x_2');
zlabel('F')
title('Objective space')
 
subplot(1,2,2);
semilogy(Convergence_curve,'Color','r','linewidth',1.5)
title('Convergence_curve')
xlabel('Iteration');
ylabel('Best score obtained so far');
 
axis tight
grid on
box on
legend('HHO')
display(['The best solution obtained by HHO is : ', num2str(Rabbit_Location)]);
display(['The best optimal value of the objective funciton found by HHO is : ', num2str(Rabbit_Energy)]);
 

3.2 Results 

 

Guess you like

Origin blog.csdn.net/weixin_46039719/article/details/124122430