Intelligent Algorithm Series: The Simulated Annealing Algorithm


  The cover of this blog was co-created with ChatGPT + DALL·E 2.

Foreword

  This article is the second in the Intelligent Algorithms (Python implementation) column, covering the Simulated Annealing Algorithm (SAA). It introduces the idea behind simulated annealing, its Python implementation, and a simulation of a related application scenario.

  The simulated annealing algorithm, as its name implies, simulates the thermodynamic process of solid annealing. It is a random search algorithm suited to large-scale combinatorial optimization problems. Unlike a general local search algorithm, SAA accepts, with a certain probability, a neighboring state whose objective value is worse; in theory, it is a global optimization algorithm.

1. Algorithm idea

  The solid annealing process is a thermodynamic process in which a solid is heated until it melts and then cooled slowly until it solidifies into a regular crystal. It consists of three stages: heating, an isothermal stage, and cooling.
  (1) Heating: as the temperature rises, the thermal motion of the particles keeps strengthening, the particles gradually leave their equilibrium positions, and their arrangement becomes random. Macroscopically the object appears to be in a liquid state; this is melting. Melting eliminates any non-uniform states that may have existed in the system, and the system's energy increases with temperature.
  (2) Isothermal stage: the annealing process requires the temperature to decrease slowly, so that the system reaches an equilibrium state at each temperature. This can be explained by the law of free-energy reduction: for a closed system that exchanges heat with its environment at constant temperature, spontaneous changes of state always proceed in the direction of decreasing free energy, and the system reaches equilibrium when the free energy reaches its minimum.
  (3) Cooling: as the temperature drops, the thermal motion of the particles gradually weakens, their arrangement becomes orderly, and the system's energy decreases continuously until a low-energy crystal structure is obtained. Annealing is complete when the liquid solidifies into a crystalline solid.

  SAA is a probability-based algorithm for finding an optimal solution in a large search space. It mimics solid annealing: the solid is first heated to a sufficiently high temperature (the algorithm's random search), then cooled slowly (the algorithm's local search), reaching an equilibrium state at each temperature (each state transition of the algorithm), and finally reaching the physical ground state (the algorithm finding the optimal solution).
  Specifically: at temperature $T$, the probability that a particle reaches equilibrium is $\exp(-\frac{\Delta E}{kT})$, where $E$ is the internal energy at temperature $T$, $\Delta E$ is its change, and $k$ is the Boltzmann constant. When solid annealing is used to model a combinatorial optimization problem, the internal energy $E$ is modeled as the objective function value $f$, and the temperature $T$ becomes the control parameter $t$; this yields the SAA for solving combinatorial optimization problems:
  starting from an initial solution $x$ and an initial value of the control parameter $t$, repeat "generate a new solution --> compute the objective function difference --> accept or discard" while gradually decreasing $t$; the current solution when the algorithm terminates is the approximate optimal solution obtained. This is a heuristic random search process based on Monte Carlo iteration. The annealing process is controlled by the cooling schedule, which includes the initial value of the control parameter $t$ and its decay factor $\Delta t$, the number of iterations $L$ at each value of $t$, and the stop condition $S$.
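
As a quick sanity check on the acceptance rule, the probability $\exp(-\frac{\Delta E}{kT})$ can be computed directly. A minimal sketch (function name illustrative, with $k = 1$) showing that the same uphill move is accepted far more often at high temperature than at low temperature:

```python
import numpy as np

def acceptance_probability(delta_e, t, k=1.0):
    """Metropolis criterion: probability of accepting an uphill move with ΔE > 0."""
    return np.exp(-delta_e / (k * t))

# the same ΔE = 1 uphill move at a high and a low temperature
p_hot = acceptance_probability(1.0, t=100.0)   # close to 1: almost always accepted
p_cold = acceptance_probability(1.0, t=0.01)   # close to 0: almost never accepted
print(p_hot, p_cold)
```

This is exactly why the early, hot phase behaves like a random search and the late, cold phase behaves like a local search.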

(Figure: flowchart of the simulated annealing algorithm)

2. Sorting out the details

2.1 Selection of hyperparameters

  It is recommended to choose a relatively large initial temperature T and a small termination temperature T_end; here T=100 and T_end=0.001 are used. Values that are too large or too small will slow the algorithm's convergence. The number of iterations at each temperature and the cooling coefficient can be tuned to the problem scenario; values that are too large will likewise slow convergence. The Boltzmann constant k is set to 1.
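
These three hyperparameters jointly fix how many temperature levels the algorithm passes through under geometric cooling T ← coldrate · T. A small sketch of that calculation (the function name is illustrative, not from the original code):

```python
import math

def num_temperature_levels(t0, t_end, coldrate):
    """How many times T is multiplied by coldrate before it drops below t_end."""
    return math.ceil(math.log(t_end / t0) / math.log(coldrate))

# the hyperparameters used in this article: T=100, T_end=1e-3, coldrate=0.9
levels = num_temperature_levels(100, 1e-3, 0.9)
print(levels)  # 110 temperature levels, each running max_count inner iterations
```

So with max_count=15 inner iterations per level, the full run evaluates on the order of 110 × 15 candidate solutions.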

2.2 Some tricks

  In fact, it is not necessary to follow the SAA flow chart above exactly. For example, the inner loop of iterations at each temperature mainly affects the total number of iterations; if the cooling coefficient is set slightly larger, e.g. 0.99, this inner loop can be omitted during implementation and the algorithm can still obtain the optimal solution. Of course, this is only the blogger's conclusion for this particular problem, and whether it generalizes remains to be verified. For the sake of completeness, this article still implements SAA according to the flow chart.
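
A sketch of that simplified variant, under the assumption stated above (one candidate per temperature level, coldrate 0.99); all names here are illustrative rather than part of the original code:

```python
import numpy as np

def f(x):
    return x * np.sin(5 * x) - x * np.cos(2 * x)

def sa_no_inner_loop(t=100.0, t_end=1e-3, coldrate=0.99, x_range=(0, 5), seed=0):
    """Simplified SAA: a single candidate per temperature level, slower cooling."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(*x_range)
    while t > t_end:
        x_new = np.clip(x + rng.normal(), *x_range)
        delta = f(x_new) - f(x)
        # always accept an improvement; accept a worse move with prob exp(-delta/t)
        if delta < 0 or np.exp(-delta / t) > rng.uniform():
            x = x_new
        t *= coldrate
    return x, f(x)

# a few restarts, keeping the best result
x_best, y_best = min((sa_no_inner_loop(seed=s) for s in range(3)), key=lambda r: r[1])
print(x_best, y_best)
```

With coldrate 0.99 the single loop still passes through over a thousand temperature levels, which is why dropping the inner loop can work in practice.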

3. Algorithm implementation

3.1 Problem scenario

  An extremum problem: find the minimum of $f(x) = x\sin(5x) - x\cos(2x)$ on the domain $[0, 5]$. Let's first try to work it out by hand:

$f'(x) = 2x\sin(2x) + \sin(5x) - \cos(2x) + 5x\cos(5x)$. Setting $f'(x) = 0$ would in theory give the stationary points, but the equation is not easy to solve...

3.2 Analysis from the algorithm's perspective

  Given the problem scenario and the algorithm principle above, two cases need to be considered:
  (1) the new solution is better, i.e. $f(x') < f(x)$: keep it as the current local optimum and continue generating new solutions;
  (2) the new solution is not better, i.e. $f(x') \geq f(x)$: compute the probability that the solution reaches equilibrium at the current temperature; if this probability exceeds a random threshold, accept the solution as the current local optimum and continue generating new solutions; otherwise discard it and continue generating new solutions.
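
The two cases above condense into a single accept/reject step; a minimal sketch of that decision (function name illustrative):

```python
import numpy as np

def accept(y_old, y_new, t, rng):
    """Decide whether to move to the new solution at temperature t."""
    if y_new < y_old:                  # case (1): better solution, always keep
        return True
    p = np.exp(-(y_new - y_old) / t)   # case (2): equilibrium probability
    return p > rng.uniform()           # keep the worse solution with probability p

rng = np.random.default_rng(0)
print(accept(1.0, 0.5, t=0.001, rng=rng))  # an improvement is always accepted
```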

3.3 Python implementation

# -*- coding:utf-8 -*-
# Author:   xiayouran
# Email:    [email protected]
# Datetime: 2023/1/16 11:12
# Filename: sa.py
import numpy as np
from matplotlib import pyplot as plt

def f(x):
    return x*np.sin(5*x) - x*np.cos(2*x)

seed = 10086
np.random.seed(seed)

T = 100     # initial temperature
T_end = 1e-3    # termination temperature
coldrate = 0.9    # cooling coefficient
max_count = 15  # number of iterations at each temperature
x_range = [0, 5]    # domain

if __name__ == '__main__':
    plt.figure()
    plt.ion()
    x_ = np.linspace(*x_range, num=200)
    plt.plot(x_, f(x_))

    x = np.random.uniform(*x_range)  # initial solution
    while T > T_end:
        for _ in range(max_count):
            y = f(x)
            x_new = np.clip(x + np.random.randn(), a_min=x_range[0], a_max=x_range[1])

            # something about plotting
            if 'sca' in globals() or 'sca' in locals():
                sca.remove()
            sca = plt.scatter(x, y, s=100, lw=0, c='red', alpha=0.5)
            plt.pause(0.01)

            y_new = f(x_new)
            if y_new < y:  # better solution: accept it as the current local optimum
                x = x_new
            else:
                p = np.exp(-(y_new - y) / T)  # equilibrium probability exp(-ΔE/(kT)) at temperature T, with k = 1
                r = np.random.uniform(0, 1)
                if p > r:  # accept the worse solution with a certain probability
                    x = x_new
        T *= coldrate

    plt.scatter(x, f(x), s=100, lw=0, c='green', alpha=0.7)
    plt.ioff()
    plt.show()
    print('Coordinates of the minimum: ({}, {})'.format(x, f(x)))

  The optimal solution obtained is as follows:

Coordinates of the minimum: (3.435632058805234, -6.276735466829619)

  The simulation process is as follows:

(Figure: animation of the simulation process)

Code repository: IALib [GitHub]

  The code for this article has been synced to the exclusive repository of the [Intelligent Algorithms (Python implementation)] column: IALib. To run the SAA algorithm from the IALib library:

git clone [email protected]:xiayouran/IALib.git
cd examples
python main.py -algo saa

Origin blog.csdn.net/qq_42730750/article/details/129523998