2023 National Competition Mathematical Modeling Ideas - Case: Annealing Algorithm

## 0 Ideas for the competition

(Solution ideas will be shared on CSDN as soon as the competition problems are released)

https://blog.csdn.net/dc_sinor?type=blog

## 1 Principle of the annealing algorithm

### 1.1 Physical background

In thermodynamics, annealing refers to the physical process in which an object gradually cools. The lower the temperature, the lower the object's energy state; when it is low enough, the liquid begins to condense and crystallize, and in the crystalline state the energy of the system is at its lowest. When nature cools slowly (that is, anneals), it can "find" the lowest-energy state: crystallization. If the cooling is too rapid (known as "quenching"), however, the result is an amorphous state that is not the lowest-energy state.

As shown in the figure below, the object starts (left) in an amorphous state. We heat the solid to a sufficiently high temperature (middle) and then let it cool slowly, i.e., anneal it (right). During heating, the particles inside the solid become increasingly disordered as the temperature rises and the internal energy increases; during slow cooling, the particles gradually become ordered, reaching an equilibrium state at each temperature, and finally reaching the ground state at room temperature, where the internal energy is minimal (at this point the object takes a crystalline form).

*(Figure: amorphous solid (left) → heated to a high temperature (middle) → slowly annealed into a crystal (right))*

### 1.2 The mathematical model behind it

If the physical meaning of annealing still seems confusing, there is a simpler way to understand it. Imagine we have a function like the one below and want to find its global optimum. If a greedy strategy is adopted, we start testing from point A, and as long as the function value keeps decreasing, the search continues. When we reach point B, the search is over, because moving in either direction from B only makes the value larger. In the end we can only find the local optimum B.

*(Figure: a function curve with the search starting at point A and becoming trapped at the local minimum B)*

According to the Metropolis criterion, the probability that a particle tends toward equilibrium at temperature T is exp(-ΔE/(kT)), where E is the internal energy at temperature T, ΔE is its change, and k is Boltzmann's constant. The Metropolis criterion is often written as

$$P = \begin{cases} 1, & \Delta E < 0 \\ \exp\left(-\dfrac{\Delta E}{kT}\right), & \Delta E \ge 0 \end{cases}$$

The Metropolis criterion states that at temperature T, the probability of accepting a state change with energy difference dE is P(dE) = exp(dE/(kT)), where k is a constant, exp is the natural exponential, and dE < 0 (this is the same rule as above, written with the opposite sign convention, since during annealing the system's energy keeps decreasing). Therefore P and T are positively correlated: the higher the temperature, the greater the probability of accepting a change with energy difference dE; the lower the temperature, the smaller that probability. Since dE < 0, we have dE/(kT) < 0, so P(dE) takes values in (0, 1), and as the temperature T decreases, P(dE) gradually decreases as well.
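As a minimal sketch of this rule (using the sign convention ΔE = E_new − E_old from the piecewise form above, with k absorbed into T; `metropolis_accept` is a hypothetical helper, not part of the code later in this article):

```python
import math
import random

def metropolis_accept(delta_E, T):
    """Metropolis rule: always accept an improving move (delta_E <= 0),
    accept a worsening move with probability exp(-delta_E / T)."""
    if delta_E <= 0:
        return True
    return random.random() < math.exp(-delta_E / T)

# The same worsening move (delta_E = 1) is accepted often at high T,
# and almost never once the temperature is low:
for T in (100.0, 10.0, 1.0, 0.1):
    rate = sum(metropolis_accept(1.0, T) for _ in range(10000)) / 10000
    print(f"T = {T:6.1f}: acceptance rate ~ {rate:.2%}")
```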

We regard a move to a worse solution as such an "energy rise", and we accept it with probability P(dE). In other words, when solid annealing is used to simulate a combinatorial optimization problem, the internal energy E is modeled by the objective function value f, and the temperature T becomes the control parameter t. This yields the simulated annealing algorithm for combinatorial optimization: starting from an initial solution i and an initial value of the control parameter t, repeat the iteration "generate a new solution → compute the difference in the objective function → accept or discard" for the current solution, while gradually decaying the value of t. The current solution when the algorithm terminates is the approximate optimal solution. This is a heuristic random search procedure based on the Monte Carlo iterative solution method. The annealing process is controlled by the cooling schedule, which includes the initial value t of the control parameter and its decay factor Δt, the number of iterations L at each value of t, and the stopping condition S.
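A minimal sketch of that iteration, under the mapping just described (`cost` plays the role of the objective f, `t` the control parameter; `neighbor`, `cost`, and the schedule constants are hypothetical placeholders, not fixed by the article):

```python
import math
import random

def simulated_annealing(init_solution, cost, neighbor,
                        t=1000.0, t_min=1e-3, decay=0.95, L=100):
    """Generic loop: generate a new solution -> compute the objective
    difference -> accept or discard, while decaying the control parameter t."""
    current, current_cost = init_solution, cost(init_solution)
    while t > t_min:                                  # stop condition S
        for _ in range(L):                            # L iterations per value of t
            candidate = neighbor(current)
            delta = cost(candidate) - current_cost    # objective difference
            # Metropolis acceptance: always take improvements,
            # take worse moves with probability exp(-delta / t)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
        t *= decay                                    # cooling schedule
    return current
```

For instance, `simulated_annealing(0.0, lambda x: x * x, lambda x: x + random.uniform(-1, 1))` drives x toward the minimizer 0.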

## 2 Implementation of the annealing algorithm

### 2.1 Algorithm process

(1) Initialization: set an initial temperature T (sufficiently large), an initial solution state S (the starting point of the iteration), and the number of iterations L for each value of T.
(2) For k = 1, ..., L, perform steps (3) to (6).
(3) Generate a new solution S′.
(4) Compute the increment Δt′ = C(S′) − C(S), where C(S) is the evaluation function.
(5) If Δt′ < 0, accept S′ as the new current solution; otherwise accept S′ as the new current solution with probability exp(−Δt′/T) (a worked example follows this list).
(6) If the termination condition is met, output the current solution as the optimal solution and stop. The termination condition is usually that several consecutive new solutions have gone unaccepted.
(7) Decrease T gradually; while T is still above its lower bound (T → 0), return to step (2).
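To make step (5) concrete: suppose a candidate solution is worse by Δt′ = 1. At T = 100 it is accepted with probability exp(−1/100) ≈ 0.99; at T = 1, with probability exp(−1) ≈ 0.37; at T = 0.01, with probability exp(−100) ≈ 0. Early in the schedule the search roams almost freely, and by the end it behaves almost greedily.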

### 2.2 Algorithm implementation

```python
import numpy as np
import matplotlib.pyplot as plt
import random

class SA(object):

    def __init__(self, interval, tab='min', T_max=10000, T_min=1, iterMax=1000, rate=0.95):
        self.interval = interval                                    # given state space, i.e. the interval to search
        self.T_max = T_max                                          # initial annealing temperature (upper bound)
        self.T_min = T_min                                          # stopping temperature (lower bound)
        self.iterMax = iterMax                                      # number of inner iterations at each temperature
        self.rate = rate                                            # cooling rate
        #############################################################
        self.x_seed = random.uniform(interval[0], interval[1])      # random seed point inside the solution space
        self.tab = tab.strip()                                      # optimization target: 'min' - minimum; 'max' - maximum
        #############################################################
        self.solve()                                                # run the main solving procedure
        self.display()                                              # visualize the result

    def solve(self):
        temp = 'deal_' + self.tab                                   # pick the matching acceptance function via reflection
        if hasattr(self, temp):
            deal = getattr(self, temp)
        else:
            exit('>>> invalid tab argument, expected "min" | "max" <<<')
        x1 = self.x_seed
        T = self.T_max
        while T >= self.T_min:
            for i in range(self.iterMax):
                f1 = self.func(x1)
                delta_x = random.random() * 2 - 1
                if x1 + delta_x >= self.interval[0] and x1 + delta_x <= self.interval[1]:   # keep the candidate inside the given state space
                    x2 = x1 + delta_x
                else:
                    x2 = x1 - delta_x
                f2 = self.func(x2)
                delta_f = f2 - f1
                x1 = deal(x1, x2, delta_f, T)
            T *= self.rate
        self.x_solu = x1                                            # final annealed solution

    def func(self, x):                                              # objective function to optimize
        value = np.sin(x**2) * (x**2 - 5*x)
        return value

    def p_min(self, delta, T):                                      # acceptance probability of a worse solution when minimizing
        probability = np.exp(-delta/T)
        return probability

    def p_max(self, delta, T):                                      # acceptance probability of a worse solution when maximizing
        probability = np.exp(delta/T)
        return probability

    def deal_min(self, x1, x2, delta, T):
        if delta < 0:                                               # better solution: always accept
            return x2
        else:                                                       # worse solution: accept with probability P
            P = self.p_min(delta, T)
            if P > random.random(): return x2
            else: return x1

    def deal_max(self, x1, x2, delta, T):
        if delta > 0:                                               # better solution: always accept
            return x2
        else:                                                       # worse solution: accept with probability P
            P = self.p_max(delta, T)
            if P > random.random(): return x2
            else: return x1

    def display(self):
        print('seed: {}\nsolution: {}'.format(self.x_seed, self.x_solu))
        plt.figure(figsize=(6, 4))
        x = np.linspace(self.interval[0], self.interval[1], 300)
        y = self.func(x)
        plt.plot(x, y, 'g-', label='function')
        plt.plot(self.x_seed, self.func(self.x_seed), 'bo', label='seed')
        plt.plot(self.x_solu, self.func(self.x_solu), 'r*', label='solution')
        plt.title('solution = {}'.format(self.x_solu))
        plt.xlabel('x')
        plt.ylabel('y')
        plt.legend()
        plt.savefig('SA.png', dpi=500)
        plt.show()
        plt.close()


if __name__ == '__main__':
    SA([-5, 5], 'max')
```
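The class runs the whole search inside its constructor, so constructing it is enough. Because the seed point and the moves are random, results vary between runs; one might fix the RNG beforehand for reproducibility (an addition, not part of the original script):

```python
import random
random.seed(42)      # hypothetical: fix the RNG so runs are reproducible
SA([-5, 5], 'max')   # maximize sin(x^2) * (x^2 - 5x) on [-5, 5]
SA([-5, 5], 'min')   # or minimize it on the same interval
```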

### 2.3 Run results

*(Figure: the resulting plot SA.png, showing the function curve with the seed point and the solution point marked)*



Original post: blog.csdn.net/dc_sinor/article/details/132478908