Intelligent optimization algorithm - grey wolf optimization algorithm (Python & Matlab implementation)

Contents

1 Basic idea of the grey wolf optimization algorithm

2 The process of grey wolves hunting prey

2.1 Social hierarchy stratification

2.2 Encircling the prey

2.3 Hunting

2.4 Attacking the prey

2.5 Searching for prey

3 Implementation steps and block diagram

3.1 Steps

3.2 Block Diagram

4 Python code implementation

5 Matlab implementation


1 Basic idea of the grey wolf optimization algorithm

The grey wolf optimization algorithm (GWO) is a swarm intelligence optimization algorithm. Its distinctive feature is that a small number of grey wolves with absolute authority lead the rest of the pack toward the prey. Before examining the characteristics of the algorithm, it is necessary to understand the hierarchy within a grey wolf pack.


Grey wolves are generally divided into four levels: wolves at the first level are denoted α, those at the second level β, those at the third level δ, and those at the fourth level ω. Under this classification, wolf α has absolute dominance over wolves β, δ, and ω; wolf β has absolute dominance over wolves δ and ω; and wolf δ has absolute dominance over wolf ω.

2 The process of grey wolves hunting prey

The GWO optimization process consists of the grey wolves' social hierarchy stratification and the tracking, encircling, and attacking of prey.

2.1 Social hierarchy stratification

When designing GWO, the first step is to construct the model of the grey wolf social hierarchy. Calculate the fitness of each individual in the population, mark the three grey wolves with the best fitness as α, β, and δ, and mark all remaining grey wolves as ω. That is, the social rank in the pack from high to low is α, β, δ, and ω. The optimization process of GWO is guided mainly by the best three solutions (i.e., α, β, and δ) in each generation of the population.
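For illustration only (this helper and its name are my own addition, not code from this post), the ranking step can be sketched in NumPy, assuming positions is an (N, dim) array and fitness an (N,) array of objective values with lower being better:

import numpy as np

def rank_wolves(positions, fitness):
    """Pick out alpha, beta, and delta: the three wolves with the best fitness."""
    order = np.argsort(fitness)          # indices sorted by fitness, best (smallest) first
    alpha = positions[order[0]].copy()   # best solution
    beta = positions[order[1]].copy()    # second best
    delta = positions[order[2]].copy()   # third best
    return alpha, beta, delta            # every remaining wolf is an omega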

2.2 Encircling the prey

Grey wolf packs gradually approach and encircle their prey according to the following formulas:

$$D = |C \cdot X_p(t) - X(t)|, \qquad X(t+1) = X_p(t) - A \cdot D$$

where t is the current iteration number, A and C are coefficient vectors, and X_p and X are the position vector of the prey and the position vector of the grey wolf, respectively. A and C are calculated as:

$$A = 2a \cdot r_1 - a, \qquad C = 2 \cdot r_2$$

where a is the convergence factor, which decreases linearly from 2 to 0 as the number of iterations increases, and r_1 and r_2 are drawn from a uniform distribution on [0, 1].
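A minimal NumPy sketch of this encircling update (encircle_step is an illustrative name of my own, not a function from this post), assuming wolf_pos and prey_pos are arrays of length dim:

import numpy as np

def encircle_step(wolf_pos, prey_pos, a):
    """One encircling move: D = |C*Xp - X|, X(t+1) = Xp - A*D."""
    r1 = np.random.rand(*wolf_pos.shape)  # r1 ~ U[0, 1]
    r2 = np.random.rand(*wolf_pos.shape)  # r2 ~ U[0, 1]
    A = 2 * a * r1 - a                    # A is uniform on [-a, a]
    C = 2 * r2                            # C is uniform on [0, 2]
    D = np.abs(C * prey_pos - wolf_pos)   # distance to the (randomly weighted) prey
    return prey_pos - A * D               # new position of the wolf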

2.3 Hunting

The other grey wolf individuals X_i in the pack update their positions according to the positions X_α, X_β, and X_δ of α, β, and δ:

$$D_\alpha = |C_1 \cdot X_\alpha - X|, \qquad D_\beta = |C_2 \cdot X_\beta - X|, \qquad D_\delta = |C_3 \cdot X_\delta - X|$$

$$X_1 = X_\alpha - A_1 \cdot D_\alpha, \qquad X_2 = X_\beta - A_2 \cdot D_\beta, \qquad X_3 = X_\delta - A_3 \cdot D_\delta$$

In these formulas, D_α, D_β, and D_δ represent the distances between α, β, δ and the other individuals, respectively; X_α, X_β, and X_δ represent the current positions of α, β, and δ; C_1, C_2, and C_3 are random vectors; and X is the current position of the grey wolf.
The position update formula for an individual grey wolf is then:

$$X(t+1) = \frac{X_1 + X_2 + X_3}{3}$$
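Reusing the encircle_step sketch from 2.2 (again, my own illustration rather than this post's code; the full implementation appears in Section 4), the hunting update for a single wolf averages the three leader-guided moves:

def hunt_step(wolf_pos, alpha_pos, beta_pos, delta_pos, a):
    """X(t+1) = (X1 + X2 + X3) / 3, each Xi guided by one leader."""
    X1 = encircle_step(wolf_pos, alpha_pos, a)  # move suggested by alpha
    X2 = encircle_step(wolf_pos, beta_pos, a)   # move suggested by beta
    X3 = encircle_step(wolf_pos, delta_pos, a)  # move suggested by delta
    return (X1 + X2 + X3) / 3.0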

2.4 Attacking the prey

In constructing the model of attacking the prey, the decrease of a (see the formulas in 2.2) causes the fluctuation range of A to shrink accordingly. In other words, A is a random vector on the interval [-a, a] (note: [-2a, 2a] in the author's first paper, corrected to [-a, a] in later papers), where a decreases linearly over the iterations. When A lies in [-1, 1], the next position of a search agent can be anywhere between its current position and the prey, so the pack closes in on the prey.

2.5 Searching for prey

Grey wolves rely mainly on information from α, β, and δ to find their prey. They first disperse to search for information about the prey's location, and then converge to attack it. This divergence is modeled by |A| > 1, which forces a search agent away from the prey and enables GWO to perform a global search. The other search coefficient in the GWO algorithm is C. As the formula in 2.2 shows, the C vector consists of random values in the interval [0, 2]; it provides random weights that emphasize (|C| > 1) or de-emphasize (|C| < 1) the prey's influence. This gives GWO random search behavior during optimization and helps the algorithm avoid getting stuck in local optima. It is worth noting that C does not decline linearly: it remains a random value throughout the iterations, which helps the algorithm jump out of local regions, especially in the later stages of the search.
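A small numerical illustration (my own addition, under the same assumptions as the sketches above) of how the range of A shrinks while C stays random, shifting the pack from exploration (|A| > 1 possible) to pure exploitation (|A| <= 1):

import numpy as np

Max_iter = 1000
for l in (0, 250, 500, 750, 999):           # a few sample iterations
    a = 2 - l * (2 / Max_iter)              # a decays linearly from 2 toward 0
    A = 2 * a * np.random.rand(100000) - a  # samples of A, uniform on [-a, a]
    frac_explore = np.mean(np.abs(A) > 1)   # share of moves that push a wolf away from the prey
    print(f"iteration {l:4d}: a = {a:.2f}, share of |A| > 1 moves = {frac_explore:.2f}")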

3 Implementation steps and block diagram

3.1 Steps

Step 1: Initialize the parameters: the population size N, the maximum number of iterations MaxIter, and the control parameters a, A, and C.
Step 2: Randomly initialize the positions X of the individual grey wolves according to the upper and lower bounds of the variables.
Step 3: Calculate the fitness value of each wolf; save the position of the wolf with the best fitness as X_α, the position of the wolf with the second-best fitness as X_β, and the position of the wolf with the third-best fitness as X_δ.
Step 4: Update the position X of each individual grey wolf.
Step 5: Update the parameters a, A, and C.
Step 6: Calculate the fitness value of each grey wolf and update the saved best positions of the three leading wolves.
Step 7: Check whether the maximum number of iterations MaxIter has been reached. If so, stop and return X_α as the final optimal solution; otherwise, go to Step 4.

3.2 Block Diagram

4 Python code implementation


#======= Import required libraries =======
import random
import numpy


def GWO(objf, lb, ub, dim, SearchAgents_no, Max_iter):

    #=== Initialize alpha, beta, and delta positions ===
    Alpha_pos = numpy.zeros(dim)  # position vector of length dim
    Alpha_score = float("inf")  # +inf: every number is smaller than it, so any fitness will improve on it

    Beta_pos = numpy.zeros(dim)
    Beta_score = float("inf")

    Delta_pos = numpy.zeros(dim)
    Delta_score = float("inf")

    #==== Expand scalar bounds into per-dimension lists ====
    if not isinstance(lb, list):  # isinstance(object, type) checks whether object is of the given type
        lb = [lb] * dim  # e.g. [-100, -100, ..., -100] with dim entries
    if not isinstance(ub, list):
        ub = [ub] * dim

    #======== Initialize the positions of all wolves ========
    Positions = numpy.zeros((SearchAgents_no, dim))
    for i in range(dim):  # each dimension is drawn uniformly from [lb[i], ub[i])
        Positions[:, i] = numpy.random.uniform(0, 1, SearchAgents_no) * (ub[i] - lb[i]) + lb[i]
    Convergence_curve = numpy.zeros(Max_iter)

    #======== Main optimization loop ========
    for l in range(0, Max_iter):
        for i in range(0, SearchAgents_no):
            #==== Clamp search agents that stray outside the search space ====
            for j in range(dim):
                Positions[i, j] = numpy.clip(Positions[i, j], lb[j], ub[j])  # clip limits values to [lb[j], ub[j]]

            #==== Evaluate the objective function for this search agent ====
            fitness = objf(Positions[i, :])

            #==== Update Alpha, Beta, and Delta ====
            if fitness < Alpha_score:
                Alpha_score = fitness
                Alpha_pos = Positions[i, :].copy()

            if fitness > Alpha_score and fitness < Beta_score:
                Beta_score = fitness
                Beta_pos = Positions[i, :].copy()

            if fitness > Alpha_score and fitness > Beta_score and fitness < Delta_score:
                Delta_score = fitness
                Delta_pos = Positions[i, :].copy()

        a = 2 - l * ((2) / Max_iter)  # a decreases linearly from 2 to 0

        for i in range(0, SearchAgents_no):
            for j in range(0, dim):
                r1 = random.random()  # r1 is a random number in [0,1]
                r2 = random.random()  # r2 is a random number in [0,1]

                A1 = 2 * a * r1 - a  # Equation (3.3)
                C1 = 2 * r2  # Equation (3.4)
                # D_alpha is the distance between this candidate wolf and Alpha
                D_alpha = abs(C1 * Alpha_pos[j] - Positions[i, j])
                X1 = Alpha_pos[j] - A1 * D_alpha  # position component suggested by Alpha

                r1 = random.random()
                r2 = random.random()

                A2 = 2 * a * r1 - a
                C2 = 2 * r2

                D_beta = abs(C2 * Beta_pos[j] - Positions[i, j])
                X2 = Beta_pos[j] - A2 * D_beta  # position component suggested by Beta

                r1 = random.random()
                r2 = random.random()

                A3 = 2 * a * r1 - a
                C3 = 2 * r2

                D_delta = abs(C3 * Delta_pos[j] - Positions[i, j])
                X3 = Delta_pos[j] - A3 * D_delta  # position component suggested by Delta

                Positions[i, j] = (X1 + X2 + X3) / 3  # move the candidate wolf to the average of the three suggestions

        Convergence_curve[l] = Alpha_score

        if (l % 1 == 0):  # report progress every iteration
            print('Iteration ' + str(l) + ': best score so far = ' + str(Alpha_score))

    return Alpha_pos, Alpha_score

#======== Test function: the sphere function F1(x) = sum(x_i^2) ========
def F1(x):
    s = numpy.sum(x ** 2)
    return s

#===========主程序================
func_details = ['F1', -100, 100, 30]
function_name = func_details[0]
Max_iter = 1000#迭代次数
lb = -100#下界
ub = 100#上届
dim = 30#狼的寻值范围
SearchAgents_no = 5#寻值的狼的数量
x = GWO(F1, lb, ub, dim, SearchAgents_no, Max_iter)


5 Matlab implementation

% Main program: GWO
clear
close all
clc
 

SearchAgents_no = 30 ; % population size
dim = 10 ; % problem dimension
Max_iter = 1000 ; % maximum number of iterations
ub = 5 ; % upper bound
lb = -5 ; % lower bound
 
%% Initialize the positions of the three leading wolves
Alpha_pos=zeros(1,dim);
Alpha_score=inf; 
 
Beta_pos=zeros(1,dim);
Beta_score=inf; 
 
Delta_pos=zeros(1,dim);
Delta_score=inf; 
 

 
%% Initialize the positions of all search agents uniformly within [lb, ub]
Positions = rand(SearchAgents_no, dim) .* (ub - lb) + lb;

Convergence_curve = zeros(Max_iter,1);
 
%% Main loop
for l=1:Max_iter
    for i=1:size(Positions,1)  
        
       %% Return search agents that go beyond the boundaries of the search space
        Flag4ub=Positions(i,:)>ub;
        Flag4lb=Positions(i,:)<lb;
        Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;               
        
        %% Calculate the objective function for each search agent
        fitness=sum(Positions(i,:).^2);
        
        %% Update Alpha, Beta, and Delta
        if fitness<Alpha_score 
            Alpha_score=fitness; % Update alpha
            Alpha_pos=Positions(i,:);
        end
        
        if fitness>Alpha_score && fitness<Beta_score 
            Beta_score=fitness; % Update beta
            Beta_pos=Positions(i,:);
        end
        
        if fitness>Alpha_score && fitness>Beta_score && fitness<Delta_score 
            Delta_score=fitness; % Update delta
            Delta_pos=Positions(i,:);
        end
    end
    
    
    a=2-l*((2)/Max_iter); % a decreases linearly from 2 to 0
    
    %% Update the positions of the search agents, including the omegas
    for i=1:size(Positions,1)
        for j=1:size(Positions,2)     
                       
            r1=rand(); % r1 is a random number in [0,1]
            r2=rand(); % r2 is a random number in [0,1]
            
            A1=2*a*r1-a; % Equation (3.3)
            C1=2*r2; % Equation (3.4)
            
            D_alpha=abs(C1*Alpha_pos(j)-Positions(i,j)); % Equation (3.5)-part 1
            X1=Alpha_pos(j)-A1*D_alpha; % Equation (3.6)-part 1
                       
            r1=rand();
            r2=rand();
            
            A2=2*a*r1-a; % Equation (3.3)
            C2=2*r2; % Equation (3.4)
            
            D_beta=abs(C2*Beta_pos(j)-Positions(i,j)); % Equation (3.5)-part 2
            X2=Beta_pos(j)-A2*D_beta; % Equation (3.6)-part 2       
            
            r1=rand();
            r2=rand(); 
            
            A3=2*a*r1-a; % Equation (3.3)
            C3=2*r2; % Equation (3.4)
            
            D_delta=abs(C3*Delta_pos(j)-Positions(i,j)); % Equation (3.5)-part 3
            X3=Delta_pos(j)-A3*D_delta; % Equation (3.6)-part 3
            
            Positions(i,j)=(X1+X2+X3)/3;% Equation (3.7)
            
        end
    end
  
    Convergence_curve(l)=Alpha_score;
    disp(['Iteration = ' num2str(l)  ', Evaluations = ' num2str(Alpha_score)]);
 
end
%======== Visualization ========
figure('unit','normalize','Position',[0.3,0.35,0.4,0.35],'color',[1 1 1],'toolbar','none')
%% Objective space
subplot(1,2,1);
x = -5:0.1:5;y=x;
L=length(x);
f=zeros(L,L);
for i=1:L
    for j=1:L
       f(i,j) = x(i)^2+y(j)^2;
    end
end
surfc(x,y,f,'LineStyle','none');
xlabel('x_1');
ylabel('x_2');
zlabel('F')
title('Objective space')
%% GWO convergence curve
subplot(1,2,2);
semilogy(Convergence_curve,'Color','r','linewidth',1.5)
title('Convergence curve')
xlabel('Iteration');
ylabel('Best score obtained so far');
 
axis tight
grid on
box on
legend('GWO')
display(['The best solution obtained by GWO is : ', num2str(Alpha_pos)]);
display(['The best optimal value of the objective function found by GWO is : ', num2str(Alpha_score)]);
 