Three gray wolf optimization (Grey Wolf Optimization, GWO) algorithms and simulation experiments - with Matlab code

Table of contents

Summary:

Gray wolf algorithm principle:

Gray wolf algorithm process:

Improved gray wolf algorithm:

Multi-objective gray wolf algorithm:

Running results of the three gray wolf algorithms:

(1) GWO

(2) I-GWO

(3) MO-GWO


Summary:

Grey Wolf Optimization (GWO) simulates the predation behavior of gray wolf packs and achieves optimization through the pack's cooperation mechanism. The GWO algorithm has a simple structure, few parameters to tune, and is easy to implement. Its adaptively adjusted convergence factor and information feedback mechanism allow it to balance local exploitation and global exploration, so it performs well in both solution accuracy and convergence speed. In this article, three different GWO algorithms are implemented, and the effectiveness of this intelligent algorithm in solving optimization problems is verified through simulation experiments. The key lines of the programs are annotated. The three GWO algorithms implemented are:

  1. Original GWO algorithm
  2. Improved GWO Algorithm (I-GWO)
  3. Multi-objective GWO algorithm (MO-GWO)

Gray wolf algorithm principle:

Gray wolves belong to the canid family and are apex predators at the top of the food chain. They mostly live in packs of 5-12 wolves on average. Of particular interest is that they have a very strict social hierarchy, described below.

The first level of the pyramid is the leader of the pack, called α. The α wolf is an individual with management ability, mainly responsible for the pack's decisions about hunting, sleeping time and place, food distribution, and so on.

The second layer of the pyramid is α's advisory group, called β. β mainly assists α in making decisions and takes over α's position when it becomes vacant. β's dominance in the pack is second only to α: it passes α's orders to the other members and feeds their execution status back to α, acting as a bridge.

The third layer of the pyramid is δ. δ obeys the decisions of α and β, and is mainly responsible for scouting, sentry duty, and caretaking. α and β wolves with poor fitness may also be demoted to δ.

The bottom of the pyramid is ω, which is mainly responsible for maintaining the balance of relationships within the pack.

Additionally, group hunting is another fascinating social behavior of gray wolves. The social hierarchy plays an important role in the group hunt, which is carried out under the leadership of α. A gray wolf hunt consists of three main phases:

1) Tracking, chasing, and approaching the prey;

2) Pursuing, encircling, and harassing the prey until it stops moving;

3) Attacking the prey.

Gray wolf algorithm process:

The GWO algorithm starts by randomly creating a gray wolf population (candidate solutions). During the iterations, the α, β and δ wolves estimate the possible position of the prey (the optimal solution), and the gray wolves update their positions based on their distance from the prey. The parameter a is decreased from 2 to 0 to balance exploration and exploitation during the search: if |A| > 1, the candidate solution moves away from the prey (exploration); if |A| < 1, the candidate solution moves toward the prey (exploitation). The flow chart of the GWO algorithm is shown in the figure.
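As a rough illustration of this update rule, here is a minimal Matlab sketch of the core GWO loop. It assumes an already initialized population matrix X (N x dim), bound vectors lb and ub, an objective handle fobj, and an iteration budget maxIter; all of these names are illustrative and not taken from the original program.

```matlab
% Minimal GWO core loop (sketch only, assumed variable names).
for t = 1:maxIter
    a = 2 - 2*t/maxIter;                       % a decreases linearly from 2 to 0
    fit = arrayfun(@(i) fobj(X(i,:)), (1:size(X,1))');
    [~, idx] = sort(fit);                      % minimization: the best three wolves lead
    Xalpha = X(idx(1),:);  Xbeta = X(idx(2),:);  Xdelta = X(idx(3),:);
    for i = 1:size(X,1)
        Xnew = zeros(1, size(X,2));
        for leader = {Xalpha, Xbeta, Xdelta}
            L = leader{1};
            A = 2*a*rand(1,size(X,2)) - a;     % |A| > 1 -> explore, |A| < 1 -> exploit
            C = 2*rand(1,size(X,2));
            D = abs(C.*L - X(i,:));            % distance to this leader
            Xnew = Xnew + (L - A.*D)/3;        % average of the three leader estimates
        end
        X(i,:) = min(max(Xnew, lb), ub);       % clamp to the search bounds
    end
end
```

Each wolf's new position is the average of three moves, one toward each of α, β and δ, and the random coefficients A and C produce the exploration/exploitation behavior described above.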

Improved gray wolf algorithm:

GWO has the following disadvantages:

1) Poor population diversity, caused by GWO's initial population generation method: random initialization cannot guarantee that the initial population is well spread over the search space.

2) Slow convergence in the later stage, caused by GWO's search mechanism: the wolves judge the distance to the prey mainly from their distances to α, β and δ, which slows convergence in the later iterations.

3) Easy to fall into a local optimum, because the α wolf is not necessarily the global optimum; as the iterations continue, the ω wolves keep approaching the top three wolves, which can trap the GWO algorithm in a local optimum.

The improvement proposed here initializes the population using good point set theory. For the same number of points, the point sequence selected by the good point set is more uniformly distributed than sequences obtained by other methods. Therefore, the initial population generated by the good point set method is evenly distributed over the search space, which ensures population diversity and lays the foundation for the global search of the algorithm.
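Below is a minimal Matlab sketch of one common good point set construction, assuming the generating values r_j = 2cos(2πj/p) with p the smallest prime not less than 2*dim + 3; the function name and arguments are illustrative.

```matlab
% Good point set initialization (sketch of one common construction).
% N: population size, dim: problem dimension, lb/ub: 1 x dim bound vectors.
function X = goodPointInit(N, dim, lb, ub)
    p = 2*dim + 3;                   % smallest prime >= 2*dim + 3
    while ~isprime(p)
        p = p + 1;
    end
    r = 2*cos(2*pi*(1:dim)/p);       % generating vector of the good point
    k = (1:N)';                      % point indices 1..N
    G = mod(k*r, 1);                 % good point set on the unit cube [0,1)^dim
    X = lb + G .* (ub - lb);         % map onto the search bounds
end
```

Compared with plain rand(N, dim) initialization, the points produced this way are spread more evenly over [lb, ub], which is exactly the diversity property the improvement relies on.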

Multi-objective gray wolf algorithm:

MO-GWO algorithm flow:

Step1: Initialize the wolf pack, compute the non-dominated solution set Archive from the population (the Archive has a fixed capacity), and perform grid computation on the solutions in the Archive to obtain their grid coordinates.
Iteration starts
Step2: Select α, β and δ from the current Archive according to the grid, and update the positions of all individuals in the wolf pack according to these three solutions.
Step3: After all positions have been updated, compute the non-dominated solution set non_dominates of the updated population.
Step4: Archive update: merge non_dominates with the Archive, compute the non-dominated solution set of the combination, and check whether it exceeds the specified Archive capacity; if it does, delete members according to their grid coordinates (a sketch of the non-dominated filtering follows this list).
Iteration ends
Step5: Check whether the maximum number of iterations has been reached. If so, output the Archive; otherwise, return to Step2.
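As a minimal Matlab sketch of the non-dominated filtering used in the Archive update of Step4 (the grid computation and the grid-based deletion are omitted; the function name and the minimization assumption are illustrative):

```matlab
% Non-dominated filtering for the Archive update (sketch, assumes minimization).
% F: M x nObj matrix of objective values; returns a logical mask of the
% non-dominated rows. Grid-based truncation on overflow is not shown.
function nd = nonDominated(F)
    M = size(F, 1);
    nd = true(M, 1);
    for i = 1:M
        for j = 1:M
            if i ~= j && all(F(j,:) <= F(i,:)) && any(F(j,:) < F(i,:))
                nd(i) = false;       % solution j dominates solution i
                break;
            end
        end
    end
end
```

In Step4, one would stack the objective values of non_dominates and the current Archive, keep the rows flagged by this mask, and, if the result still exceeds the Archive capacity, remove members from the most crowded grid cells.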

Running results of the three gray wolf algorithms:

(1) GWO

(2) I-GWO

(3) MO-GWO


Original article: blog.csdn.net/widhdbjf/article/details/130742513