Implementation of the simulated annealing algorithm and the genetic algorithm for solving multi-objective optimization problems (mathematical modeling)

1. Simulated annealing algorithm

The simulated annealing algorithm is a global optimization algorithm; the problem it solves is usually to find a global optimum that minimizes (or maximizes) some function. It searches the solution space by simulating the process of physical annealing. An initial solution is generated at random at some starting temperature, and the temperature is then lowered step by step. At each step a new solution is sampled at random near the current one, and a worse solution is accepted with a certain probability, which makes it possible to escape local optima and eventually reach the global optimum.
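Concretely, the acceptance rule used in the code below is the Metropolis criterion: a candidate that is worse than the current solution by ΔE > 0 is accepted with probability P = exp(−ΔE / T), where T is the current temperature. At high temperature almost any move is accepted; as T falls, worse moves become increasingly unlikely.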

Let's look at a simple example. Suppose we want to find the global minimum of the objective function f(x, y) = sin(10x) + cos(3y) over the search range −2 ≤ x ≤ 2, −1 ≤ y ≤ 1. We can do this with the following code:

import math
import random

# Objective function to minimize
def objective_function(x, y):
    return math.sin(10*x) + math.cos(3*y)

# Simulated annealing
def simulated_annealing(initial_temperature, cooling_rate, num_iterations):
    # Random initial solution within the search range, and initial temperature
    current_solution = [random.uniform(-2, 2), random.uniform(-1, 1)]
    current_energy = objective_function(current_solution[0], current_solution[1])
    current_temperature = initial_temperature

    # Track the best solution seen so far
    best_solution = list(current_solution)
    best_energy = current_energy

    # Iterate a fixed number of times
    for i in range(num_iterations):
        # Randomly perturb the current solution, clamping the candidate
        # to the search range
        new_solution = [current_solution[0] + 0.1*random.uniform(-1, 1),
                        current_solution[1] + 0.1*random.uniform(-1, 1)]
        new_solution[0] = max(-2.0, min(2.0, new_solution[0]))
        new_solution[1] = max(-1.0, min(1.0, new_solution[1]))
        new_energy = objective_function(new_solution[0], new_solution[1])

        # Energy difference between the candidate and the current solution
        delta_energy = new_energy - current_energy

        # Accept the candidate if it is better
        if delta_energy < 0:
            current_solution = new_solution
            current_energy = new_energy
        # Otherwise accept the worse candidate with a temperature-dependent probability
        else:
            probability = math.exp(-delta_energy / current_temperature)
            if random.uniform(0, 1) < probability:
                current_solution = new_solution
                current_energy = new_energy

        # Remember the best solution found so far
        if current_energy < best_energy:
            best_solution = list(current_solution)
            best_energy = current_energy

        # Cool down
        current_temperature *= cooling_rate

    return best_solution, best_energy

# Initial temperature, cooling rate, and number of iterations
initial_temperature = 100
cooling_rate = 0.95
num_iterations = 1000

# Run simulated annealing
best_solution, best_energy = simulated_annealing(initial_temperature, cooling_rate, num_iterations)

# Print the result
print("Global best solution:", best_solution)
print("Global best value:", best_energy)

In this example, the objective_function function defines the objective, and the simulated_annealing function implements the core of the simulated annealing algorithm: the parameter initial_temperature is the starting temperature, cooling_rate is the factor by which the temperature is multiplied each iteration, and num_iterations is the number of iterations. Inside simulated_annealing, the current temperature and the energy difference determine whether a new solution is accepted, including whether to accept a worse one; these are the core steps of the simulated annealing algorithm.

Finally, we set the initial temperature, cooling rate, and number of iterations, call simulated_annealing to run the algorithm, and print the best solution and best value found. The parameters can be tuned, and the algorithm run several times, to obtain more reliable results.
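Note that with these settings the temperature decays geometrically: after 1000 iterations it is 100 × 0.95^1000 ≈ 5 × 10⁻²¹, so late iterations accept essentially no worse moves and the search degenerates into local descent. Because a single run can still get stuck, one cheap remedy is to keep the best of several independent runs; a minimal sketch, reusing the function defined above:

# Keep the best (lowest-energy) result of 10 independent runs
best_solution, best_energy = min(
    (simulated_annealing(100, 0.95, 1000) for _ in range(10)),
    key=lambda result: result[1])
print("Best of 10 runs:", best_solution, best_energy)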

2. Genetic algorithm

If there are multiple objective functions, a multi-objective optimization algorithm can be used. One of the most commonly used is NSGA-II (Non-dominated Sorting Genetic Algorithm II), a genetic algorithm designed for multi-objective optimization problems.

The core idea of the NSGA-II algorithm is to find non-dominated solutions by maintaining a Pareto front, and then to apply selection and crossover to those solutions to generate the next generation. The specific steps are as follows (a sketch of the underlying dominance test appears after the list):

  1. Initialize the population and compute each individual's fitness values, Pareto rank, and crowding distance.
  2. Perform Pareto sorting: order all individuals in the population by Pareto rank in ascending order, and order individuals of the same rank by crowding distance in descending order.
  3. Select a subset of high-quality individuals as parents and apply crossover and mutation to generate the next generation.
  4. Repeat the above steps until a stopping condition is met.
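As a concrete reference for the dominance relation behind these steps, here is a minimal illustrative sketch of the standard Pareto dominance test for two-objective minimization (the dominates helper is an illustration only, not part of the sample program below, which uses a slightly simplified component-wise comparison):

# Illustrative helper (not used by the sample program below):
# for minimization, f_a dominates f_b when it is no worse in every
# objective and strictly better in at least one.
def dominates(f_a, f_b):
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))

# Example: (1, 4) dominates (2, 4), while (1, 4) and (2, 3) are incomparable.
assert dominates([1, 4], [2, 4])
assert not dominates([1, 4], [2, 3]) and not dominates([2, 3], [1, 4])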

The following sample code implements the NSGA-II algorithm in Python for a two-objective problem:

import random

# Objective functions: evaluate both objectives for every individual
def objective_function(population):
    fitness = []
    for x in population:
        obj_1 = pow(x[0], 2)
        obj_2 = pow(x[0]-2, 2) + pow(x[1], 2)
        # Collect the two objective values for this individual
        fitness.append([obj_1, obj_2])
    return fitness

# Fast non-dominated sorting: return the list of fronts, where each
# front is a list of population indices (front 0 is the non-dominated set)
def pareto_ranking(fitness):
    n = len(fitness)
    S = [[] for _ in range(n)]   # S[i]: individuals dominated by i
    rank = [0] * n               # rank[i]: number of individuals dominating i
    F = [[]]                     # fronts, built one at a time
    for i in range(n):
        for j in range(n):
            if i != j:
                if fitness[i][0] <= fitness[j][0] and fitness[i][1] <= fitness[j][1]:
                    if j not in S[i]:
                        S[i].append(j)
                elif fitness[j][0] <= fitness[i][0] and fitness[j][1] <= fitness[i][1]:
                    rank[i] += 1
        if rank[i] == 0:
            F[0].append(i)
    # Peel off the remaining fronts one by one
    i = 0
    while len(F[i]) > 0:
        Q = []
        for p_j in F[i]:
            for q in S[p_j]:
                rank[q] -= 1
                if rank[q] == 0:
                    Q.append(q)
        i += 1
        F.append(Q)
    del F[-1]  # the last front is always empty
    return F

# Crowding distance of each individual within one front; returns a
# dict mapping population index -> distance
def crowding_distance(fitness, indices):
    distance = {i: 0.0 for i in indices}
    for m in range(2):  # for each of the two objectives
        sorted_indices = sorted(indices, key=lambda x: fitness[x][m])
        # Boundary individuals are always preserved
        distance[sorted_indices[0]] = float('inf')
        distance[sorted_indices[-1]] = float('inf')
        for i in range(1, len(indices) - 1):
            distance[sorted_indices[i]] += (fitness[sorted_indices[i+1]][m]
                                            - fitness[sorted_indices[i-1]][m])
    return distance

# Selection: fill the parent set front by front; when a front does not
# fit completely, keep its individuals with the largest crowding distance
def selection(population, fitness, num_parents):
    parents = []
    for front in pareto_ranking(fitness):
        if len(parents) + len(front) <= num_parents:
            parents.extend(population[i] for i in front)
        else:
            distance = crowding_distance(fitness, front)
            front = sorted(front, key=lambda i: distance[i], reverse=True)
            remaining = num_parents - len(parents)
            parents.extend(population[i] for i in front[:remaining])
            break
    return parents

# Uniform crossover: each child gene is copied from one of two
# randomly chosen parents
def crossover(parents, offspring_size):
    offspring = []
    for _ in range(offspring_size):
        parent_1 = random.choice(parents)
        parent_2 = random.choice(parents)
        child = [parent_1[j] if random.random() < 0.5 else parent_2[j]
                 for j in range(len(parent_1))]
        offspring.append(child)
    return offspring

# Mutation: perturb each gene with probability 0.1
def mutation(offspring_crossover):
    for child in offspring_crossover:
        if random.random() < 0.1:
            child[0] += random.uniform(-0.5, 0.5)
        if random.random() < 0.1:
            child[1] += random.uniform(-0.5, 0.5)
    return offspring_crossover

# Algorithm parameters
num_generations = 50
population_size = 100
num_parents = 20
offspring_size = population_size - num_parents

# Initialize the population
population = [[random.uniform(-5, 5), random.uniform(-5, 5)]
              for _ in range(population_size)]
for generation in range(num_generations):
    # Evaluate both objectives for every individual
    fitness = objective_function(population)
    # Report one solution from the current first front (before the
    # population is replaced, so the index still matches fitness)
    best_index = pareto_ranking(fitness)[0][0]
    print("Generation", generation + 1, ": one Pareto-optimal solution is",
          population[best_index])
    # Selection
    parents = selection(population, fitness, num_parents)
    # Crossover
    offspring_crossover = crossover(parents, offspring_size)
    # Mutation
    offspring_mutation = mutation(offspring_crossover)
    # The next generation consists of the parents and their offspring
    population = parents + offspring_mutation

# Print the first (non-dominated) front of the final population
fitness = objective_function(population)
pareto_front = pareto_ranking(fitness)[0]
print("\nPareto front:")
for i in pareto_front:
    print(population[i], fitness[i])

In this example, we again use Python, this time for a problem with two objective functions. The objective_function function takes a population and returns the two objective values for each individual. The pareto_ranking function performs non-dominated sorting and returns the fronts of the population (i.e. the Pareto rank of each individual), and the crowding_distance function computes the crowding distance of each individual within a front. Finally, the selection, crossover, and mutation functions implement the selection, crossover, and mutation operators that are standard in genetic algorithms.

In the main program, we combine these functions into the NSGA-II loop and output the first Pareto front of the final population, i.e. the non-dominated solutions that were found. The algorithm can be tuned by adjusting parameters such as the population size and the number of generations.
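As a sanity check, the true Pareto set of this particular problem can be derived by hand. For any fixed x, setting y = 0 reduces f2 = (x − 2)² + y² without changing f1 = x², so every Pareto-optimal solution has y = 0; and along y = 0 the two objectives genuinely trade off only for 0 ≤ x ≤ 2 (outside that interval both can be improved at once). The printed front should therefore cluster near y ≈ 0 with x between 0 and 2.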

3. Differences and connections

Simulated annealing (SA) and NSGA-II (Non-dominated Sorting Genetic Algorithm II) are two different optimization algorithms, with the following differences:

  1. Different algorithmic principles

    SA is a stochastic heuristic search algorithm modeled on the annealing of solid matter; by accepting inferior solutions with some probability, it can gradually approach the global optimum. NSGA-II is a multi-objective genetic algorithm that targets multi-objective optimization problems and finds non-dominated solutions by maintaining a Pareto front.

  2. Different application scenarios

    SA is suited to finding the global optimum of single-objective optimization problems, especially when there is no obvious analytical solution. NSGA-II targets multi-objective optimization problems: it handles several objective functions at once and produces a set of Pareto-optimal solutions along the Pareto front.

  3. Different optimization mechanisms

    SA controls the probability of accepting inferior solutions by lowering the temperature, which allows it to escape local optima and search the solution space globally. NSGA-II generates the next generation mainly through selection, crossover, and mutation, and maintains the set of Pareto-optimal solutions through non-dominated sorting.

  4. Different computational complexity

    The cost of SA depends on the cooling schedule; each iteration is cheap, but many iterations may be needed to converge to the global optimum. The cost of NSGA-II is driven mainly by the population size and the per-generation operations (the fast non-dominated sort is O(MN²) for M objectives and N individuals), so it is usually more expensive than SA.

In general, the simulated annealing algorithm and the NSGA-II genetic algorithm are both common optimization algorithms; they differ in the problem types they target and in their search strategies, and the appropriate algorithm can be chosen according to the situation at hand.
