Commonly used optimization algorithms (simulated annealing, genetic algorithms, particle swarm optimization) and their implementation in Python

Table of contents

  • Simulated annealing algorithm
      • Steps
      • Python implementation
  • Genetic algorithm
      • Steps
      • Python implementation
  • Particle swarm optimization
      • Steps
      • Python implementation
  • Recommended reading



Simulated annealing algorithm

Simulated annealing (SA) is a global optimization algorithm, usually used to solve complex non-convex optimization problems. Its basic idea is to accept inferior solutions with a certain probability, which helps the search escape local optima and explore the whole solution space for the global optimum.

Steps

The steps of the simulated annealing algorithm are as follows:

  • (1) Choose an initial solution x0 and an initial temperature T0;

  • (2) At the current temperature T, generate a new solution by randomly perturbing the current solution;

  • (3) Compute the change in the objective value Δf; if Δf < 0, accept the new solution, otherwise accept it with probability exp(−Δf / T);

  • (4) Lower the temperature according to the annealing schedule (for example T ← r·T with 0 < r < 1);

  • (5) Repeat steps (2)–(4) until the temperature falls below a threshold or the maximum number of iterations is reached.

In each cooling cycle, the probability of accepting an inferior solution decreases as the temperature drops, so the search gradually converges toward the global optimum. However, the effectiveness of the simulated annealing algorithm depends largely on the settings of parameters such as the initial temperature, the annealing rate and the termination conditions.
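For intuition, here is a minimal sketch of how the acceptance rule behaves: for a candidate solution that is worse by Δf = 1, the probability exp(−Δf / T) of accepting it shrinks as the temperature drops.

import math

delta_f = 1.0  # assume the candidate solution is worse by 1
for T in [100, 10, 1, 0.1]:
    p = math.exp(-delta_f / T)
    print("T = {:>6}: acceptance probability = {:.4f}".format(T, p))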

Python implementation

The following takes solving the global minimum of the one-variable function f(x) = x² + sin(5x) on [-5, 5] as an example to demonstrate how to implement the simulated annealing algorithm in Python.

First define the objective function:

import math
import random

# Objective function: f(x) = x^2 + sin(5x)
def func(x):
    return x ** 2 + math.sin(5 * x)

Let's take a quick look at what the function looks like:

import matplotlib.pyplot as plt
import numpy as np

xarray = np.linspace(-5, 5, 10000)
plt.plot(xarray, [func(x) for x in xarray])
plt.show()

Define the simulated annealing algorithm:

def simulated_annealing(func, x0, T0, r, iter_max, tol):
    '''
    func: objective function
    x0: initial solution
    T0: initial temperature
    r: annealing (cooling) rate
    iter_max: maximum number of iterations
    tol: temperature lower bound
    '''

    x_best = x0
    f_best = func(x0)
    T = T0
    iter = 0
    while T > tol and iter < iter_max: # check the stopping conditions
        x_new = x_best + random.uniform(-1, 1) * T # generate a new solution by perturbing the current one
        f_new = func(x_new) # evaluate the objective (fitness) value
        delta_f = f_new - f_best # energy difference
        if delta_f < 0 or random.uniform(0, 1) < math.exp(-delta_f / T): # decide whether to accept the new solution
            x_best, f_best = x_new, f_new
        T *= r # cool down
        iter += 1 # increase the iteration counter
    return x_best, f_best

Set the initial parameters and solve:

x0 = 2
T0 = 100
r = 0.95
iter_max = 10000
tol = 0.0001
x_best, f_best = simulated_annealing(func, x0, T0, r, iter_max, tol)
print("x_best = {:.4f}, f_best = {:.4f}".format(x_best, f_best))

The result is: x_best = -0.2906, f_best = -0.9086

Since the algorithm is stochastic, the exact numbers vary from run to run, but this is quite close to the true global minimum.
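Because the algorithm is random, a simple way to make the answer more reliable is to restart it several times from random starting points and keep the best result. A minimal sketch, reusing the function and parameters defined above:

best_x, best_f = None, float("inf")
for _ in range(10):
    x0 = random.uniform(-5, 5)  # random starting point inside the search interval
    x, f = simulated_annealing(func, x0, T0=100, r=0.95, iter_max=10000, tol=0.0001)
    if f < best_f:
        best_x, best_f = x, f
print("best over 10 restarts: x = {:.4f}, f = {:.4f}".format(best_x, best_f))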

Genetic algorithm

The genetic algorithm is an optimization method inspired by biological evolution: it simulates the process of natural evolution and gradually improves the candidate solutions of a problem through natural selection and genetic operations. It is a commonly used optimization algorithm and is suitable for many practical problems that are otherwise hard to solve. It has good global search ability, adaptability and robustness, but it also has some disadvantages, such as slow convergence and the possibility of getting trapped in a local optimum.

Steps

The basic steps of a genetic algorithm include:

  • Initialize the population: according to the characteristics and requirements of the problem, randomly generate a certain number of solutions as the initial population.

  • Evaluate fitness: according to the evaluation function of the problem, compute the fitness of each solution for the subsequent selection and genetic operations.

  • Selection: according to the fitness values, select a certain number of individuals as the parents of the next generation.

  • Genetic operations: generate the individuals of the next generation through operations such as crossover and mutation.

  • Repeat steps 2~4 until the stopping condition is met (such as reaching a certain number of generations or finding a good enough solution).

The following simple example walks through how a genetic algorithm is applied.

Suppose we want to find the integer in the interval [0, 15] that maximizes a function f(x). First, we need to initialize the population. Assuming that the gene length of each individual is 4 (that is, an individual is represented by 4 binary digits, so 0010 means 2), we can randomly generate several 4-bit binary strings, such as 1101, 0110, 0011 and 0001, as the initial population. For each individual we obtain the corresponding function value by converting it to a decimal number; for example, 1101 corresponds to the decimal number 13, and substituting into the function gives f(13) = 242, which is the fitness of the individual 1101.

Next, perform selection. Commonly used selection operators include roulette-wheel selection, tournament selection, and so on. Here we use roulette-wheel selection: each individual is assigned a slice of the wheel proportional to its fitness, and then a certain number of individuals are drawn at random as parents, as in the sketch below.
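A minimal sketch of roulette-wheel selection using the individuals from this example (the helper name select_roulette is just for illustration, and only f(13) = 242 comes from the text; the other fitness values are made up):

import random

def select_roulette(individuals, fitnesses):
    # Each individual gets a slice of the wheel proportional to its fitness
    total = sum(fitnesses)
    r = random.uniform(0, total)
    cumulative = 0
    for ind, fit in zip(individuals, fitnesses):
        cumulative += fit
        if cumulative >= r:
            return ind
    return individuals[-1]

individuals = ["1101", "0110", "0011", "0001"]
fitnesses = [242, 36, 9, 1]  # only 242 is from the text; the rest are illustrative
parent = select_roulette(individuals, fitnesses)
print("selected parent:", parent)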

Then, genetic operations (crossover, mutation, etc.) are performed. Here we use one-point crossover and bit-flip mutation. Suppose the individuals 1101 and 0011 are selected for crossover and the crossover point is after the second bit; after crossover we obtain the offspring 1111 and 0001. Then, we apply bit mutation to an offspring, i.e. pick a bit at random and flip it. For example, if the second bit of 1111 is flipped, the offspring 1011 is obtained.
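A minimal sketch of these two operations on the 4-bit strings above (here the crossover point and the mutated bit are fixed to mirror the example; in the real algorithm both are chosen at random):

parent1, parent2 = [1, 1, 0, 1], [0, 0, 1, 1]

# One-point crossover after the second bit
point = 2
child1 = parent1[:point] + parent2[point:]   # -> [1, 1, 1, 1]
child2 = parent2[:point] + parent1[point:]   # -> [0, 0, 0, 1]

# Bit-flip mutation: flip the second bit of child1
child1[1] = 1 - child1[1]                    # -> [1, 0, 1, 1]

print(child1, child2)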

Finally, fitness is evaluated again and the parents and offspring are compared. Suppose the fitness of the offspring 1111 and 1011 is f(15) = 260 and f(11) = 142 respectively. Individuals with higher fitness are kept as members of the next-generation population.

Repeat the above steps until the stop condition is met.

Python implementation

import math
import random

# Objective function we want to minimize
def func(x):
    return x**2 + math.sin(5*x)

# Fitness function: the GA maximizes fitness, so we flip the sign of the
# objective and add a constant offset to keep the fitness positive
def fitness(x):
    return 30 - (x**2 + math.sin(5*x))

POPULATION_SIZE = 50
GENE_LENGTH = 16

# Randomly generate the initial population of binary individuals
def generate_population(population_size, gene_length):
    population = []
    for i in range(population_size):
        individual = [random.randint(0, 1) for j in range(gene_length)]
        population.append(individual)
    return population

population = generate_population(POPULATION_SIZE, GENE_LENGTH)
# One-point crossover between two parents
def crossover(parent1, parent2):
    crossover_point = random.randint(0, GENE_LENGTH - 1)
    child1 = parent1[:crossover_point] + parent2[crossover_point:]
    child2 = parent2[:crossover_point] + parent1[crossover_point:]
    return child1, child2

# Bit-flip mutation: each gene is flipped with probability mutation_probability
def mutation(individual, mutation_probability):
    for i in range(GENE_LENGTH):
        if random.random() < mutation_probability:
            individual[i] = 1 - individual[i]
    return individual

# Roulette-wheel selection of two distinct parents
def select_parents(population):
    total_fitness = sum([fitness(decode(individual)) for individual in population])
    parent1 = None
    parent2 = None
    while parent1 == parent2:
        parent1 = select_individual(population, total_fitness)
        parent2 = select_individual(population, total_fitness)
    return parent1, parent2

# Spin the roulette wheel once: each individual occupies a slice
# of the wheel proportional to its fitness
def select_individual(population, total_fitness):
    r = random.uniform(0, total_fitness)
    fitness_sum = 0
    for individual in population:
        fitness_sum += fitness(decode(individual))
        if fitness_sum > r:
            return individual
    return population[-1]

# Map a 16-bit binary individual to a real number in [-5, 5]
def decode(individual):
    x = sum([gene*2**i for i, gene in enumerate(individual)])
    return -5 + 10 * x / (2**GENE_LENGTH - 1)

GENERATIONS = 100
CROSSOVER_PROBABILITY = 0.8
MUTATION_PROBABILITY = 0.05

def genetic_algorithm():
    population = generate_population(POPULATION_SIZE, GENE_LENGTH)
    for i in range(GENERATIONS):
        new_population = []
        for j in range(int(POPULATION_SIZE / 2)):
            parent1, parent2 = select_parents(population)
            if random.random() < CROSSOVER_PROBABILITY:
                child1, child2 = crossover(parent1, parent2)
            else:
                # copy the parents so that mutation does not modify them in place
                child1, child2 = parent1.copy(), parent2.copy()
            child1 = mutation(child1, MUTATION_PROBABILITY)
            child2 = mutation(child2, MUTATION_PROBABILITY)
            new_population.append(child1)
            new_population.append(child2)
        population = new_population
    # Pick the fittest individual of the final generation
    best_individual = max(population, key=lambda individual: fitness(decode(individual)))
    best_fitness = fitness(decode(best_individual))
    best_x = decode(best_individual)
    best_func = func(best_x)
    return best_x, best_fitness, best_func

best_x, best_fitness, best_func = genetic_algorithm()

print("x = ", best_x)
print("maximum fitness =", best_fitness)
print("function value =", best_func)

Note that the fitness function fitness is different from the objective function here: we want to find a minimum, but in this algorithm a larger fitness is better, so the objective is negated and shifted. The constant 30 is chosen to be larger than the maximum of x² + sin(5x) on [-5, 5] (about 25), so the fitness stays positive, which roulette-wheel selection requires.
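A quick sanity check of that offset (a small sketch; any constant above the maximum of f on [-5, 5] would do):

import numpy as np

xs = np.linspace(-5, 5, 10001)
print((xs**2 + np.sin(5 * xs)).max())  # about 25.1, comfortably below 30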

Particle swarm optimization

Particle Swarm Optimization (PSO) is a commonly used optimization algorithm. It is an evolutionary computation technique that originated from studies of the foraging behavior of bird flocks. The algorithm seeks the optimal solution by simulating the information exchange and cooperation that occur when a flock searches for food. Specifically, the algorithm randomly generates a certain number of "particles" in the solution space, each representing a candidate solution, and then repeatedly adjusts the position and velocity of each particle so that the swarm gradually moves toward, and closes in on, the optimal solution.

Steps

  • (1) Initialize the swarm with random positions and velocities;

  • (2) Calculate the fitness value of each particle;

  • (3) For each particle, compare its fitness value with the fitness of the best position it has experienced so far (pbest); if the new value is better, take the current position as the new pbest;

  • (4) For each particle, compare its fitness value with the fitness of the best position experienced by the whole swarm (gbest); if the new value is better, take the current position as the new gbest;

  • (5) Update the velocity and position of each particle according to the two iterative formulas shown after this list;

  • (6) If the stopping condition has not been reached (usually a good enough fitness value or a preset maximum number of generations Gmax), return to step (2); otherwise, go to step (7);

  • (7) Output gbest.
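The two update formulas in step (5) are the standard PSO updates, consistent with the implementation below, where w is the inertia weight, c1 and c2 are the cognitive and social acceleration coefficients, and r1, r2 are random numbers in [0, 1]:

v_i ← w * v_i + c1 * r1 * (pbest_i - x_i) + c2 * r2 * (gbest - x_i)
x_i ← x_i + v_i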

Python implementation

Again, we look for the minimum of f(x) = x² + sin(5x) on [-5, 5]:

import numpy as np

# Fitness function: the objective f(x) = x^2 + sin(5x) that we want to minimize
def evaluate_fitness(x):
    return x ** 2 + np.sin(5*x)

class PSO:
    def __init__(self, n_particles, n_iterations, w, c1, c2, bounds):
        self.n_particles = n_particles
        self.n_iterations = n_iterations
        self.w = w
        self.c1 = c1
        self.c2 = c2
        self.bounds = bounds

        # Random initial positions within the bounds and zero initial velocities
        self.particles_x = np.random.uniform(bounds[0], bounds[1], size=(n_particles,))
        self.particles_v = np.zeros_like(self.particles_x)
        self.particles_fitness = evaluate_fitness(self.particles_x)

        # Personal best position and fitness of each particle
        self.particles_best_x = self.particles_x.copy()
        self.particles_best_fitness = self.particles_fitness.copy()

        # Global best position of the whole swarm
        self.global_best_x = self.particles_x[self.particles_fitness.argmin()]

    def update_particle_velocity(self):
        r1 = np.random.uniform(size=self.n_particles)
        r2 = np.random.uniform(size=self.n_particles)

        # Velocity update: inertia term + cognitive term (towards the personal best)
        # + social term (towards the global best)
        self.particles_v = self.w * self.particles_v + \
            self.c1 * r1 * (self.particles_best_x - self.particles_x) + \
            self.c2 * r2 * (self.global_best_x - self.particles_x)

        # Clip the velocity to avoid overly large jumps
        self.particles_v = np.clip(self.particles_v, -1, 1)

    def update_particle_position(self):
        self.particles_x = self.particles_x + self.particles_v

        self.particles_x = np.clip(self.particles_x, self.bounds[0], self.bounds[1])

        self.particles_fitness = evaluate_fitness(self.particles_x)

        better_mask = self.particles_fitness < self.particles_best_fitness
        self.particles_best_x[better_mask] = self.particles_x[better_mask]
        self.particles_best_fitness[better_mask] = self.particles_fitness[better_mask]

        best_particle = self.particles_fitness.argmin()
        if self.particles_fitness[best_particle] < evaluate_fitness(self.global_best_x):
            self.global_best_x = self.particles_x[best_particle]

    def run(self):
        for i in range(self.n_iterations):
            self.update_particle_velocity()
            self.update_particle_position()

            #print("Iteration:", i, "Global Best:", self.global_best_x)

        return self.global_best_x

pso = PSO(n_particles=20, n_iterations=50, w=0.7, c1=1.4, c2=1.4, bounds=(-5, 5))
global_best_x = pso.run()

In the above code, a function evaluate_fitness is first defined to compute the fitness value, which here is simply the objective function value. Then, a PSO class is defined to implement the particle swarm optimization algorithm. During initialization, a certain number of particles are randomly generated and their fitness values are calculated. In each iteration, the velocity and position of every particle are updated, and the personal best of each particle and the global best of the swarm are refreshed. Finally, we output the global best position found.
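To inspect the result, you can simply print the best position and its objective value, for example:

print("global_best_x =", global_best_x)
print("f(global_best_x) =", evaluate_fitness(global_best_x))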

The optimal solution is: -0.290836630206147.

Recommended reading

Basic Principles of Genetic Algorithm (GA) (qq.com)

Basic Principles of Simulated Annealing Algorithm (SA) (qq.com)

Particle Swarm Optimization (PSO) Basic Principles (qq.com)

Well, that concludes our introduction to these three important optimization algorithms.
