Advanced understanding of the genetic algorithm in Python + paper reproduction (pure practical content, with summaries of and pointers to earlier work)


Today is the first day of 2023. First of all, I wish everyone a happy New Year and smooth progress in study and work! I have long wanted to write an explanation of the genetic algorithm; most of the content was prepared in September 2022, but I was too busy to finish it at the time. I have now organized it roughly as follows for your reference.

Quite some time has passed since the draft was written, so if anything is lacking, please point it out in the comments or send me a private message.

The outline of this article is as follows. It first briefly introduces the concept of the genetic algorithm and compares it with other optimization algorithms. The focus is on the latter two parts: the necessary background knowledge and the reproduction of a related paper. There is a great deal of material on this topic online, and I also consulted many blogs while studying; the links are given in Part 3, which should save you a lot of search time. Part 4 is the most important: it gives some methods and ideas for reproducing the paper, and these have a certain generality.

1. Introduction and related concepts

Introduction to Genetic Algorithms

The genetic algorithm is an intelligent optimization algorithm, a random search method for finding the global optimum; it can be used to find extreme values and to fit parameters. As its name suggests, it borrows Darwin's theory of evolution from biology, "survival of the fittest", and expressing that theory in the form of an algorithm gives the genetic algorithm.

The genetic algorithm was proposed by Professor Holland in the United States. It is a random search algorithm that uses the ideas of natural selection and biological evolution to search for the optimal solution in the search space. It seeks good individuals by simulating the operations of reproduction, crossover and mutation found in natural selection, evaluates the quality of each individual with a fitness function, keeps individuals with higher fitness according to the principle of survival of the fittest, and continuously increases the number of good individuals in the search, repeating the process until the individual with the highest fitness is found. The genetic algorithm performs a heuristic, population-based search, which makes it easy to parallelize. In recent years, with the growth of computer applications, its excellent performance has attracted wide attention, and it has been applied in many fields such as function optimization, combinatorial optimization and image processing. [1]

The genetic algorithm is a highly parallel, random, adaptive search algorithm developed by analogy with the mechanisms of natural selection and evolution in the biological world. It uses a population-based search technique, representing the population as a set of candidate solutions. By applying a series of genetic operations such as selection, crossover and mutation to the current population, a new generation is produced, and the population gradually evolves towards a state containing approximately optimal solutions. [2]

The second description comes from the article "Application of a synthetic algorithm based on genetic algorithm and Gauss-Newton method in the analysis of radiopharmaceutical biokinetic data", which is also the paper reproduced in this article; it is listed as reference [2] at the end.

Introduction to related concepts

Genetic algorithms have some terms corresponding to biological genetics. These terms often appear in other articles or blogs. For ease of understanding, they are given as follows:

① Gene: in biology, the basic hereditary unit; in the algorithm it corresponds to one component (one feature, or one bit) of a solution.
② Chromosome: a binary string composed of genes; it is the encoding of a solution, also called an array or bit string.
③ Locus: the position of a gene within a chromosome. The values that a gene can take are called alleles.
④ Individual: an entity characterized by its chromosome; it represents one feasible solution of the problem. A collection of individuals is called a population, and the number of individuals it contains is the population size, i.e. the number of candidate solutions.
⑤ Genotype: the composition of the genes. Corresponding to the genotype is the phenotype, the external expression of the genotype, i.e. the observable state of the individual organism; in the genetic algorithm it corresponds to the solution space.
⑥ Fitness: corresponds to the fitness function used in the genetic algorithm to evaluate how good an individual is; it represents the individual's ability to adapt to the environment.
⑦ Selection: corresponds to the selection operator in the genetic algorithm.
⑧ Crossover: the process of genetic recombination that generates a set of new solutions; corresponds to the crossover operator in the genetic algorithm.
⑨ Mutation: the process of genetic mutation, i.e. a change in some component of the encoding; corresponds to the mutation operator in the genetic algorithm.
⑩ Evolutionary process: the solution process of the genetic algorithm. [1]

My supplement:
11. Fitness function and objective function: the fitness function is usually obtained by an appropriate transformation of the objective function, and the algorithm uses the fitness values to select individuals.
There are three requirements for the design of the fitness function:

  1. Non-negativity. The genetic algorithm determines the probability of each individual being retained according to its fitness, and a probability cannot be negative, so the fitness function we design must be greater than or equal to 0.
  2. Selectivity. The key operation by which the genetic algorithm realizes "survival of the fittest" is selection: individuals with high fitness are kept to take part in the next generation, and individuals with low fitness are eliminated. Therefore the fitness function only needs to reflect the relative ranking among individuals, not their absolute values. (In plain terms, adding or subtracting the same number from every individual's fitness does not change the relative ranking, and so does not affect the subsequent selection.)
  3. Generality. The fitness function should be designed so that users do not need to modify the parameters inside it for each new problem.
    There are several ways to design the fitness function. Reference [1] gives two of them (the formulas were images in the original post; a reconstruction appears below). The two designs follow essentially the same idea: subtracting the minimum value (or subtracting from the maximum value) of the objective makes the fitness non-negative. This is valid because it does not change the relative ranking among individuals, and the individuals with high fitness are still the ones retained. For example, suppose the fitness of 3 individuals is -2, 1, 3 and I want to select the individual with the greatest fitness; subtracting the minimum from all of them gives 0, 3, 5, and I still select the third individual. This satisfies the three properties described above.
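The two transformations from reference [1] were shown as images in the original post, so the LaTeX below is my reconstruction from the surrounding description (with $f$ the objective function and $F$ the resulting fitness), not a copy of the original formulas:

$$F(x) = f(x) - \min_{x} f(x) \qquad \text{(maximization: shift up by the minimum so fitness is non-negative)}$$

$$F(x) = \max_{x} f(x) - f(x) \qquad \text{(minimization: larger objective values get smaller fitness)}$$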

When we write the program, however, to guarantee that it runs correctly we usually add a small positive number, typically 1e-3, so that every fitness value is strictly positive.

The fitness function for a maximization problem is coded as follows:

def get_fitness(pop):
    x, y = translateDNA(pop)                 # decode each binary individual into decimal x, y
    pred = F(x, y)                           # evaluate the objective function
    return (pred - np.min(pred)) + 1e-3      # shift by the minimum so fitness >= 0, plus a small positive number

The fitness function for a minimization problem is coded as follows:

def get_fitness(pop):
    x, y = translateDNA(pop)                 # decode each binary individual into decimal x, y
    pred = F(x, y)                           # evaluate the objective function
    return -(pred - np.max(pred)) + 1e-3     # flip the sign relative to the maximum so smaller objective values get larger fitness

The two code snippets above are referenced from this blog. Their implementation matches the analysis just made; the author explains it in detail and writes very well, so I strongly recommend that article.

12. Encoding and decoding: recall that the numbers we normally operate on mathematically are decimal, with rules such as multiplication and the four basic arithmetic operations. The genetic algorithm has its own set of rules, crossover and mutation, and these two operations cannot act directly on decimal numbers, so we need to encode and decode. Encoding prepares a solution for the crossover and mutation operations; decoding converts the result produced under those rules back into decimal, which is the final answer we want. The most commonly used encoding rule is binary encoding, and this article also uses binary encoding as the example; a small round-trip sketch is given below.
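A minimal sketch (my own illustration; the 10-bit length and the range [0, 1] are arbitrary choices, not values from the paper) of how a binary string is decoded back into a decimal number in a given range, which is exactly what translateDNA does later for a whole population:

import numpy as np

DNA_LEN = 10                      # length of the binary string; more bits means finer resolution
LOW, HIGH = 0.0, 1.0              # decimal range the string is mapped into

dna = np.random.randint(2, size=DNA_LEN)                          # "encoding": a random 0/1 string
weights = 2 ** np.arange(DNA_LEN)[::-1]                           # place values 2^9, 2^8, ..., 2^0
integer = dna.dot(weights)                                        # binary string -> integer in [0, 2^DNA_LEN - 1]
value = integer / float(2 ** DNA_LEN - 1) * (HIGH - LOW) + LOW    # scale the integer into [LOW, HIGH]
print(dna, "->", value)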

2. Comparison with other intelligent optimization algorithms

Several common intelligent optimization algorithms are introduced below:

Ant Colony Algorithm

Particle Swarm Optimization Algorithm

Artificial Neural Network Algorithm

Simulated Annealing Algorithm

Fish Swarm Algorithm

1. Ant colony algorithm
In 1992 the Italian scholars Colorni A, Dorigo M and Maniezzo V proposed the ant colony algorithm. While searching for food, an ant colony can always find a shortest path from the nest to the food source, and the ant colony algorithm exploits this optimization ability of the colony to solve difficult problems in discrete system optimization. It has been applied successfully to the traveling salesman problem, scheduling problems and others, with good results. However, research on ant colony algorithms is still young and lacks a solid mathematical foundation; convergence and the theoretical basis of the algorithm still need further study.
2. Particle swarm optimization algorithm
Kennedy and Eberhart proposed the particle swarm optimization algorithm in 1995. It is based mainly on studies of the foraging behavior of bird flocks and seeks the optimal solution by iterating a set of randomly generated initial solutions. The algorithm calls each candidate solution a "particle". Every particle has a velocity, which determines the direction and distance of its flight, and a fitness value determined by the function being optimized; in the solution space all particles follow the current best particle as the search proceeds. During the iterations each particle updates itself by tracking two extreme values: the global best, i.e. the best solution found so far by the whole population, and the personal best, i.e. the best solution found so far by the particle itself. Of course, a particle may also track only part of the population, in which case the tracked value is a local best.
3. Artificial neural network algorithm
Artificial neural networks are a popular research area at present. An artificial neural network is a parallel, distributed processing system built on principles from biological neural networks. It is a network composed of many multi-input, single-output neurons; each input of a neuron has a connection channel with a corresponding connection weight. The network determines its weights through continuous learning, a process called training, and the learning methods fall into two types: supervised and unsupervised. The difference between the two is whether a target output is given for each input, i.e. whether the training is guided: supervised learning has inputs with target outputs and guides the training, while unsupervised learning does not.
4. Simulated annealing algorithm
In 1953 Metropolis proposed the idea of simulated annealing, and in 1983 Kirkpatrick introduced the annealing idea into combinatorial optimization with great success. The simulated annealing algorithm is an extension of local search, and in theory it is a global optimization algorithm. It is based on the annealing of solids: at high temperature matter undergoes disordered, intense thermal motion, so the algorithm starts at a high temperature and then lowers it slowly until the material gradually settles into an ordered equilibrium state, finally reaching the ground state with minimum energy; this process is called annealing. The algorithm uses the Metropolis sampling criterion to search for good solutions at high temperature; the process is random, and repeated sampling during the cooling schedule gradually closes in on the optimum. Simulated annealing can find the global optimum effectively, but it also has drawbacks such as slow convergence.
5. Fish Swarm Algorithm
In 2002 Dr. Li Xiaolei proposed the artificial fish swarm algorithm, another typical application of artificial intelligence. It performs global optimization by simulating the foraging and survival activities of fish schools: the algorithm first constructs simple low-level behaviors for the individuals, then performs local optimization on each individual, and gradually searches out the global optimum. The fish swarm algorithm has a strong ability to find the global extremum and to avoid getting trapped in local extrema; it adapts well to the search space; it converges quickly and obtains feasible solutions rapidly; and it places loose requirements on the problem, needing neither a precise description nor a strict mechanistic model, which broadens its range of application. It has been widely used for continuous optimization, system identification, combinatorial optimization, neural network training, reactive power optimization in power systems and other problems, with good results.

3. Necessary knowledge (standing on the shoulders of predecessors)

When I was learning genetic algorithms, I also read a large number of blogs on CSDN and other websites, because I needed to understand the principles in order to reproduce the paper I had found.

I spent a lot of time searching for articles, but the ones that are really helpful are quite limited, and most of the useful ones I found on CSDN. Below are my notes and the links. After working through these articles and then the program here, I believe you will basically understand the method, and this should save you plenty of time. (There are certainly other good blogs that I simply have not found yet.)

Although many reference blogs are listed, the first two are the ones I recommend most. After thoroughly understanding those two, you will understand the principle and the code logic of the genetic algorithm; the other blogs can serve as supplementary explanations. Since this article does not cover the basics of genetic algorithms, please fill in that background before reading on; this article builds my own understanding on top of it.

4. Reproducing the paper in Python

The reproduced part is the genetic-algorithm portion of reference [2]. The first major task is the objective function. The objective function given in the article is

$$\xi(p_1, p_2, q_1, q_2) = \sum_{i=1}^{m} \left[ p_1 e^{-q_1 t_i} + p_2 e^{-q_2 t_i} - A_i \right]^2$$

where $t_i$ and $A_i$ are known: $t_i$ is the i-th measurement time and $A_i$ is the drug concentration measured in the organ at that time.
The article gives the drug concentrations in three organs of the rat at the 10th, 30th, 60th, 120th and 360th minute. Therefore, for each organ a model can be built from the objective function above with m = 5; $t_i$ and $A_i$ are known, and only $p_1$, $p_2$, $q_1$, $q_2$ are unknown, which are exactly the four parameters we want. This gives a four-variable function $\xi(p_1, p_2, q_1, q_2)$, and the goal is to find the values of $p_1$, $p_2$, $q_1$, $q_2$ at which $\xi$ reaches its minimum.

In essence, then, we are finding the minimum of a four-variable function: the five data points are substituted one by one and the squared residuals are summed. The objective function can therefore be written as:

dic_liver = {0.167: 0.681, 0.5: 0.436, 1: 0.709, 2: 0.263, 6: 0.12}     # key: time (h), value: concentration in the liver
dic_lung = {0.167: 1.069, 0.5: 0.689, 1: 0.666, 2: 0.342, 6: 0.162}     # concentration in the lung
dic_stomach = {0.167: 4.827, 0.5: 3.866, 1: 1.67, 2: 1.638, 6: 0.798}   # concentration in the stomach


def F(p1, p2, q1, q2):  # objective function, method 1
    fun = 0
    for key, value in dic_liver.items():
        fun = ((p1 * np.exp(-q1 * key) + p2 * np.exp(-q2 * key)) - value) ** 2 + fun
    return fun


def F2(p1, p2, q1, q2):  # objective function, method 2
    l1 = list(dic_liver.keys())
    l2 = list(dic_liver.values())
    result = [((p1 * np.exp(-q1 * i) + p2 * np.exp(-q2 * i)) - j) ** 2 for i, j in zip(l1, l2)]
    # result = sum(result)
    total = 0
    for i in range(len(result)):
        total = total + result[i]
    return total

Brief analysis: the three dictionaries at the top hold the data given in the paper and describe the relationship between time and the corresponding drug concentration. I used the dictionary type, with the key indicating the measurement time (the paper gives it in minutes; here it is divided by 60 and expressed in hours). I wrote the objective function in two ways, both based on the dictionary; the first one is clearly more concise.
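As a quick sanity check (the trial values below are arbitrary, chosen only to show the call, and assume the definitions above have been run), F returns a single non-negative scalar, the sum of squared residuals over the five liver measurements; the smaller it is, the better the fit:

val = F(0.5, 0.3, 1.0, 0.2)   # arbitrary trial values for p1, p2, q1, q2
print(val)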

The fitness function can be set as follows (this is the fitness function for a minimization problem):

def get_fitness(pop):
    p1, p2, q1, q2 = translateDNA(pop)
    pred = F(p1, p2, q1, q2)
    return -(pred - np.max(pred)) + 1e-3  # add a small positive number so every fitness value stays positive

The second major task is the encoding and decoding process. In general, we would first randomly generate an initial decimal population, encode it into binary for crossover, mutation and the other operations, and then decode the binary back into decimal. In practice the first step (randomly generating decimal numbers) can be dropped, because we can directly generate random binary strings in the program, so the encoding step becomes trivial and we only need to focus on decoding. Take the paper I reproduced as an example:


The binary length corresponding to one parameter is 20 bits, and there are 4 parameters in this paper, so each row of the initial binary population has 20 * 4 = 80 bits. The paper requires the initial population to contain 150 individuals, i.e. 150 candidate solutions ($p_1$, $p_2$, $q_1$, $q_2$), so there are 150 rows and 80 columns, and the result is a 150 * 80 0-1 matrix.

Note: if you have read the two blogs recommended above, you will know that a decimal number can be converted into a binary string of any length by "divide by 2, take the remainder, arrange in reverse order", padding the high bits with zeros. A longer binary string means more computation but also higher precision, as the small calculation below shows.
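For example (my own illustration, not from the paper), with DNA_SIZE = 20 and a parameter range of [0, 1] as used below, the smallest step the decoding can represent is:

DNA_SIZE = 20
low, high = 0.0, 1.0
resolution = (high - low) / (2 ** DNA_SIZE - 1)
print(resolution)   # about 9.54e-07, i.e. roughly six decimal places of precision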

The code that generates the initial binary population is as follows:

pop = np.random.randint(2, size=(POP_SIZE, DNA_SIZE * 4))
# matrix (POP_SIZE, DNA_SIZE * 4); POP_SIZE is 150, DNA_SIZE is 20
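A quick shape check (my own addition, with the constants written out explicitly) confirms the 150 * 80 0-1 matrix described above:

import numpy as np

pop = np.random.randint(2, size=(150, 20 * 4))
print(pop.shape)   # (150, 80): 150 individuals, each with 4 parameters x 20 bits = 80 bits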

The decoding process is as follows:

def translateDNA(pop):
    # Decode: pop is the population matrix; each row is one binary-encoded DNA.
    # Number of rows = population size; number of columns = DNA length * number of parameters.
    p1_pop = pop[:, :20]
    p2_pop = pop[:, 20:40]
    q1_pop = pop[:, 40:60]
    q2_pop = pop[:, 60:]

    # Decoding: convert each 20-bit slice to an integer, then scale it into the allowed range of p_i / q_i
    p1 = p1_pop.dot(2 ** np.arange(DNA_SIZE)[::-1]) / float(2 ** DNA_SIZE - 1) * (p1_BOUND[1] - p1_BOUND[0]) + p1_BOUND[0]
    p2 = p2_pop.dot(2 ** np.arange(DNA_SIZE)[::-1]) / float(2 ** DNA_SIZE - 1) * (p2_BOUND[1] - p2_BOUND[0]) + p2_BOUND[0]
    q1 = q1_pop.dot(2 ** np.arange(DNA_SIZE)[::-1]) / float(2 ** DNA_SIZE - 1) * (q1_BOUND[1] - q1_BOUND[0]) + q1_BOUND[0]
    q2 = q2_pop.dot(2 ** np.arange(DNA_SIZE)[::-1]) / float(2 ** DNA_SIZE - 1) * (q2_BOUND[1] - q2_BOUND[0]) + q2_BOUND[0]
    return p1, p2, q1, q2

Once these two major pieces have been adapted to your own problem, the rest of the code can basically be reused as is. Note, however, that in the genetic algorithm the positions of crossover and mutation are random; for example, my DNA length here is 80, so the range of the random position must be changed to (0, 80).
cross_points = np.random.randint(low=0, high=DNA_SIZE * 4)

Other DNA lengths follow by analogy.

The complete code is attached below. The initial parameter settings all come from the reproduced paper. Thanks again to the authors who explained the genetic algorithm in detail and attached Python implementations; their posts provided a lot of reference for this reproduction:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import warnings

warnings.filterwarnings('ignore')

DNA_SIZE = 20  # DNA length (number of binary bits per parameter)
POP_SIZE = 150  # initial population size
CROSSOVER_RATE = 0.95  # crossover probability
MUTATION_RATE = 0.005  # mutation probability (0.005; the draft also notes 0.01 as an alternative)
N_GENERATIONS = 1000  # number of generations; 800-1200 is a suitable range, 1000 is used here
p1_BOUND = [0, 1]  # parameter ranges
p2_BOUND = [0, 1]
q1_BOUND = [0, 1]
q2_BOUND = [0, 1]

dic_liver = {0.167: 0.681, 0.5: 0.436, 1: 0.709, 2: 0.263, 6: 0.12}     # key: time (h), value: concentration in the liver
dic_lung = {0.167: 1.069, 0.5: 0.689, 1: 0.666, 2: 0.342, 6: 0.162}     # concentration in the lung
dic_stomach = {0.167: 4.827, 0.5: 3.866, 1: 1.67, 2: 1.638, 6: 0.798}   # concentration in the stomach


def F(p1, p2, q1, q2):  # objective function, method 1
    fun = 0
    for key, value in dic_liver.items():
        fun = ((p1 * np.exp(-q1 * key) + p2 * np.exp(-q2 * key)) - value) ** 2 + fun
    return fun


def F2(p1, p2, q1, q2):  # objective function, method 2
    l1 = list(dic_liver.keys())
    l2 = list(dic_liver.values())
    result = [((p1 * np.exp(-q1 * i) + p2 * np.exp(-q2 * i)) - j) ** 2 for i, j in zip(l1, l2)]
    # result = sum(result)
    total = 0
    for i in range(len(result)):
        total = total + result[i]
    return total

# fitness function for a minimization problem
def get_fitness(pop):
    p1, p2, q1, q2 = translateDNA(pop)
    pred = F(p1, p2, q1, q2)
    return -(pred - np.max(pred)) + 1e-3  # add a small positive number so every fitness value stays positive

def translateDNA(pop):
    # Decode: pop is the population matrix; each row is one binary-encoded DNA.
    # Number of rows = population size, i.e. 150; number of columns = DNA length * number of parameters, i.e. 20 * 4 = 80 (150 * 80).
    p1_pop = pop[:, :20]
    p2_pop = pop[:, 20:40]
    q1_pop = pop[:, 40:60]
    q2_pop = pop[:, 60:]

    # convert each 20-bit slice to an integer, then scale it into the allowed parameter range
    p1 = p1_pop.dot(2 ** np.arange(DNA_SIZE)[::-1]) / float(2 ** DNA_SIZE - 1) * (p1_BOUND[1] - p1_BOUND[0]) + p1_BOUND[0]
    p2 = p2_pop.dot(2 ** np.arange(DNA_SIZE)[::-1]) / float(2 ** DNA_SIZE - 1) * (p2_BOUND[1] - p2_BOUND[0]) + p2_BOUND[0]
    q1 = q1_pop.dot(2 ** np.arange(DNA_SIZE)[::-1]) / float(2 ** DNA_SIZE - 1) * (q1_BOUND[1] - q1_BOUND[0]) + q1_BOUND[0]
    q2 = q2_pop.dot(2 ** np.arange(DNA_SIZE)[::-1]) / float(2 ** DNA_SIZE - 1) * (q2_BOUND[1] - q2_BOUND[0]) + q2_BOUND[0]
    return p1, p2, q1, q2

# The following function does two things: crossover and mutation
def crossover_and_mutation(pop, CROSSOVER_RATE=0.95):  # single-point crossover
    new_pop = []
    for father in pop:  # iterate over every individual in the population, taking it as the father
        child = father  # the child first inherits all of the father's genes (the 0/1 bits of the binary string)
        if np.random.rand() < CROSSOVER_RATE:  # crossover does not always happen; it occurs with a certain probability
            mother = pop[np.random.randint(POP_SIZE)]  # pick another individual from the population as the mother
            cross_points = np.random.randint(low=0, high=DNA_SIZE * 4)  # random crossover point
            child[cross_points:] = mother[cross_points:]  # the child takes the mother's genes after the crossover point
        mutation(child)  # each child mutates with a certain probability
        new_pop.append(child)

    return new_pop


# basic bit-flip mutation operator
def mutation(child, MUTATION_RATE=0.005):
    if np.random.rand() < MUTATION_RATE:  # mutate with probability MUTATION_RATE
        mutate_point = np.random.randint(0, DNA_SIZE * 4)  # random position of the gene to mutate
        child[mutate_point] = child[mutate_point] ^ 1  # flip the bit at the mutation point (XOR: 1^1=0, 1^0=1, 0^0=0)


def select(pop, fitness):  # draw indices from np.arange(POP_SIZE) with probability proportional to fitness; higher fitness means a higher chance of being chosen
    idx = np.random.choice(np.arange(POP_SIZE), size=POP_SIZE, replace=True,
                           p=(fitness) / (fitness.sum()))
    return pop[idx]

# # usage of np.random.choice()
# arr = ['pooh', 'rabbit', 'piglet', 'Christopher']
# np.random.choice(arr, size=11, p=[0.5, 0.1, 0.1, 0.3])


def print_info(pop):
    fitness = get_fitness(pop)
    max_fitness_index = np.argmax(fitness)  # the largest fitness corresponds to the smallest objective value
    print("max_fitness:", fitness[max_fitness_index])
    p1, p2, q1, q2 = translateDNA(pop)
    print("Best genotype:", pop[max_fitness_index])
    print("(p1, p2, q1, q2):",
          (p1[max_fitness_index], p2[max_fitness_index], q1[max_fitness_index], q2[max_fitness_index]))

if __name__ == "__main__":

    pop = np.random.randint(2, size=(POP_SIZE, DNA_SIZE * 4))  # matrix (POP_SIZE, DNA_SIZE) POP_SIZE为150,DNA_SIZE为20
    for _ in range(N_GENERATIONS):  # 迭代N代
        pop = np.array(crossover_and_mutation(pop, CROSSOVER_RATE))  # 进行交叉和变异
        fitness = get_fitness(pop)
        pop = select(pop, fitness)  

    print_info(pop)

Compared with the original blog, I have added the necessary comments to the parts of the code that are relatively hard to understand, to make them easier to follow.

Taking the liver as an example, the optimal genotype is the binary string corresponding to p1, p2, q1 and q2; decoding it gives the decimal result (shown as a screenshot in the original post). Although there is some discrepancy with the paper's result, the program now runs end to end as a first step; the fit will be refined later.
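One quick way to judge the fit (my own addition, not part of the paper's procedure) is to plot the fitted curve against the measured liver data; the best_* values below are placeholders standing in for the decoded parameters printed by print_info:

import numpy as np
import matplotlib.pyplot as plt

dic_liver = {0.167: 0.681, 0.5: 0.436, 1: 0.709, 2: 0.263, 6: 0.12}   # measured liver data from the paper
best_p1, best_p2, best_q1, best_q2 = 0.6, 0.3, 2.0, 0.1               # placeholders; substitute the values printed by print_info

t = np.linspace(0, 6, 200)                                            # time axis in hours
fitted = best_p1 * np.exp(-best_q1 * t) + best_p2 * np.exp(-best_q2 * t)

plt.plot(t, fitted, label="fitted model")
plt.scatter(list(dic_liver.keys()), list(dic_liver.values()), color="red", label="measured (liver)")
plt.xlabel("time (h)")
plt.ylabel("concentration")
plt.legend()
plt.show()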

Crossover operators include single-point crossover, multi-point crossover, uniform crossover and so on; the paper uses multi-point crossover, whereas the code above uses single-point crossover. Likewise the mutation operation is not unique; the crossover and mutation used here are simply the most common choices.
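If you want to move closer to the paper's multi-point crossover, here is a minimal two-point sketch; this is only my own illustration of the idea, not the exact operator described in the paper:

import numpy as np

def two_point_crossover(father, mother, crossover_rate=0.95):
    """With a given probability, replace the segment between two random cut points with the mother's genes."""
    child = father.copy()
    if np.random.rand() < crossover_rate:
        a, b = sorted(np.random.randint(0, len(father), size=2))  # two random cut points
        child[a:b] = mother[a:b]                                  # middle segment comes from the mother
    return child

# usage sketch on two random 80-bit individuals
father = np.random.randint(2, size=80)
mother = np.random.randint(2, size=80)
print(two_point_crossover(father, mother))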

5. Improvement of Genetic Algorithm (Trailer)

To make the program run faster and give better results, it needs to be improved. Please make sure you understand this article first; in the next article I will draw on the literature I have collected to introduce improvements to the genetic algorithm and their Python implementations. The improvements mainly concern the following three aspects (a small sketch of one such idea follows the list):

  • Improvements to the fitness function
  • Improvements to the crossover probability
  • Improvements to the mutation probability
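As a small preview (one common idea from the literature, not necessarily the scheme I will present in the next article), the crossover probability can be made adaptive so that fitter-than-average parents are disturbed less:

def adaptive_crossover_rate(f_parent, f_avg, f_max, pc_high=0.9, pc_low=0.6):
    # If the parent is fitter than the population average, lower its crossover probability
    # linearly towards pc_low as its fitness approaches the population maximum.
    if f_max > f_avg and f_parent >= f_avg:
        return pc_high - (pc_high - pc_low) * (f_parent - f_avg) / (f_max - f_avg)
    return pc_high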

I am not sure whether the code is easy to follow. If you would like a walkthrough of the paper reproduction, leave a message in the comments or send me a private message; if there is enough interest I may record an explanation video. I hope this is helpful to everyone!


  1. Li Yanmei. An improved genetic algorithm and its application [D]. South China University of Technology, 2012.

  2. Sun Liang, Li Junli, Cheng Jianping. Application of a synthetic algorithm based on genetic algorithm and Gauss-Newton method in biokinetic data analysis of radiopharmaceuticals [J]. Nuclear Technology, 2006(12): 927-931.

Referenced blog posts (the links were given in the original post):
  • Genetic algorithm python (including routine code and detailed explanation)
  • Genetic algorithm detailed explanation with python code implementation
  • Genetic algorithm for solving the maximum of the banana function
  • Genetic algorithm: detailed explanation of the python code implementation and example analysis
  • Parameter optimization: genetic algorithm (GA) hyperparameter optimization, python implementation
  • Genetic Algorithms: a comprehensive explanation and python implementation

Origin blog.csdn.net/golden_knife/article/details/128510731