25.6 Introduction to 10 Optimization Methods in MATLAB: Simulated Annealing Algorithm (MATLAB Program)

1. Brief description


Readers without a physics background may be puzzled by the word "annealing". Annealing a solid means heating it to a sufficiently high temperature and then cooling it slowly. During heating, the internal particles, originally arranged in an orderly fashion, begin to move about in disorder, and the internal energy of the solid keeps rising; during cooling, the arrangement of the particles gradually becomes orderly again.

A physical system always tends toward the state of lowest energy. In theory, if the cooling stage of annealing is slow enough to be treated as a sequence of quasi-equilibrium moments, the solid can reach its minimum-internal-energy state (within the allowable range) at every temperature. Finding that temperature schedule and the corresponding states is the core problem of the simulated annealing algorithm.

The simulated annealing algorithm, like the genetic algorithm, the particle swarm algorithm, and other intelligent algorithms, is a general-purpose probabilistic algorithm. For background, see my earlier study notes:

Kezard: Genetic Algorithm 1 (GA) --- Basic Concepts and Algorithm Process

Kezard: Particle Swarm Optimization (PSO) Algorithm Concept and Code Implementation

Intelligent algorithms of this type search for the global optimal solution over a large search space. Every element of the search space is a candidate global optimum, so how the search is organized, and how the "internal energy" of the solid particles at a given temperature is judged, are what distinguish one intelligent algorithm from another. In simulated annealing, the temperature T drives the search; the objective function value f of the actual problem plays the role of the internal energy E; and the state of the solid at a given temperature corresponds to a solution x of the problem, i.e. E = f(x). Under certain rules the algorithm lowers the temperature T so that the internal energy of the solid approaches its optimal (lowest) value, until the global optimal solution is found, just like the physical annealing of a solid.

How to judge the decrease of internal energy?

Metropolis criterion: moving from the current state i to a candidate state j, if the internal energy of the new state is lower than that of state i, accept j as the new state, and its internal energy becomes the current optimum; otherwise accept the new solution with probability p = exp(-(E_j - E_i)/(kT)), where k is the Boltzmann constant and T is the current temperature. In the acceptance test, the computer's Monte Carlo simulation supplies the probability (a random number).

The algorithm is designed to find the minimum internal energy, which corresponds to the global minimum of the practical problem. To find a global maximum, simply put a minus sign in front of the objective function.
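The Metropolis criterion above can be written as a small helper function; this is a minimal sketch (the name `metropolis_accept` is mine, and k is absorbed into T, as is usual in simulated annealing):

```matlab
function accept = metropolis_accept(E_old, E_new, T)
% Metropolis criterion: always accept a lower-energy state;
% accept a higher-energy state with probability exp(-(E_new - E_old)/T).
dE = E_new - E_old;
if dE < 0
    accept = true;
else
    accept = rand() < exp(-dE / T);   % Monte Carlo trial supplies the probability
end
end
```

For example, `metropolis_accept(1.0, 0.5, 10)` always returns true, while an uphill move is accepted less and less often as T falls.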

Basic steps of the algorithm:

  1. Determine the initial temperature T0 of the annealing process, randomly generate a particle state, i.e. an initial solution x0 at this temperature, and compute the objective function value f(x0), which is the internal energy E0 of this state.
  2. Move to the next cooling temperature.
  3. Perturb the current solution x to generate a new solution x', compute the corresponding internal energy, decide whether to accept the new solution according to the Metropolis criterion, and update the local optimal solution and the global optimal solution.
  4. At each cooling temperature, repeat the perturbation of x m times, that is, generate m new solutions, performing steps 2~3.
  5. Check whether T has reached the minimum temperature limit, generally set to 0.01∼5; if so, terminate the algorithm, otherwise continue with steps 2~4.

The algorithm is controlled by two nested loops: the outer loop controls the temperature of the cooling process, and the inner loop controls the perturbation of the solution x. The initial temperature T0 is set high so that even an initial move that increases the internal energy may be accepted, which lets the search jump out of the trap of a local minimum; then, by lowering the temperature slowly while continually perturbing the solution, the algorithm can eventually converge to the global optimal solution.
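The two-loop structure can be sketched as follows. This is only an illustration: the quadratic objective, the Gaussian perturbation, and the concrete parameter values (T0 = 1000 matches the choice discussed below; the rest are mine) are placeholders for the demo.

```matlab
f  = @(x) x(1)^2 + x(2)^2;      % demo objective (internal energy E = f(x))
T  = 1000;  Tmin = 0.01;        % initial and minimum temperatures
q  = 0.9;                       % cooling coefficient
m  = 50;                        % inner-loop (Markov chain) length
x  = [4 -3];  fx = f(x);        % initial solution x0 and its energy
xbest = x;  fbest = fx;         % global optimal solution

while T > Tmin                          % outer loop: cooling schedule
    for k = 1:m                         % inner loop: perturb the solution
        xnew = x + 0.5*randn(size(x));  % add a disturbance
        fnew = f(xnew);
        dE = fnew - fx;
        if dE < 0 || rand() < exp(-dE/T)   % Metropolis criterion
            x = xnew;  fx = fnew;          % accept the new state
        end
        if fx < fbest                      % update the global best
            xbest = x;  fbest = fx;
        end
    end
    T = q * T;                          % lower the temperature
end
```

With these settings the search runs roughly a hundred cooling steps of fifty perturbations each and ends near the minimum at the origin.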

Step 5 above states that the outer loop terminates when the minimum temperature limit is reached. Of course, we can also introduce conditions that end the cooling process early, such as a maximum number of searches or a target accuracy.

Parameter selection

1. The initial value T0 of the control parameter T

Random search algorithms for global optimization (such as the genetic algorithm and particle swarm optimization) generally combine a coarse global search with a fine local search: the coarse search lets the process escape the trap of a local optimum, while the fine local search homes in on the global optimum. Generally speaking, a sufficiently large initial value T0 allows the search to traverse all feasible solutions, whereas a value that is too small makes it hard for the algorithm to get past a local optimum, so the global optimum is never found. T0 should therefore be chosen appropriately and in coordination with the other parameters. (My program uses an initial value of 1000.)

2. Cooling process

The cooling coefficient q takes a value in 0.5∼0.99 and determines the cooling process. The larger q is, the slower the cooling and the greater the number of iterations, but also the more likely the global optimum is found. If thermal equilibrium has already been reached at one temperature, only a small number of state changes is needed to attain quasi-equilibrium at the next, which shortens the inner loop (the Markov chain length).
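Since the schedule is geometric, the number of cooling steps from T0 down to the minimum temperature follows directly from q. A quick check, using the T0 = 1000 mentioned above and a Tmin of 0.01 (both within the ranges discussed in this article):

```matlab
T0 = 1000;  Tmin = 0.01;  q = 0.9;
n_steps = ceil(log(Tmin/T0) / log(q));   % smallest n with T0*q^n <= Tmin
fprintf('%d cooling steps\n', n_steps);  % 110 steps for q = 0.9
```

Raising q to 0.99 increases the count to 1146 steps, illustrating the trade-off between run time and search quality.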

3. The length of the Markov chain (the number of perturbation transformations of the solution)

The selection principle is: once the cooling function of the control parameter T has been chosen, experience suggests a chain length of about 100n, where n is the scale of the problem.

MATLAB solution of the TSP problem with the simulated annealing algorithm

  1. TSP problem: TSP (Traveling Salesman Problem) is the traveling salesman problem. A traveling salesman wants to make a tour of N cities, visiting each city exactly once and finally returning to the starting city, while minimizing the total cost of the tour (total distance, total expense, etc.). How do we find this route?
  • The solution space S of the TSP consists of all circuits that visit every city exactly once and return to the starting point, so a feasible solution is a permutation of all the cities, and the solution space can be expressed as S = {(c1, c2, ..., cN) | (c1, ..., cN) is a permutation of (1, 2, ..., N)}, where ci is the label of a city and each permutation is one circuit. The initial solution is therefore a random ordering of {1, 2, ..., N}.

2. Objective function

The goal is the smallest cost of traversing all the cities; here we take the cost to be the total path length, i.e. we seek the shortest closed path (a solution of the TSP is a permutation of the city labels): f(c1, ..., cN) = d(c1,c2) + d(c2,c3) + ... + d(c(N-1),cN) + d(cN,c1).
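Assuming a symmetric distance matrix D, where D(i,j) is the distance between cities i and j, the objective can be evaluated as below (`tour_length` is my name for the helper, not from the original post):

```matlab
function L = tour_length(route, D)
% Total length of the closed tour: the sum of consecutive legs
% plus the leg from the last city back to the first.
L = D(route(end), route(1));
for i = 1:numel(route)-1
    L = L + D(route(i), route(i+1));
end
end
```

A random initial solution is then simply `route = randperm(N)`, matching the random ordering of {1, 2, ..., N} described above.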

3. Generation of new solutions

  • Two-point exchange: randomly choose two positions u and v and swap the visiting order of the cities at those positions.
  • Three-point move: take the path segment between two chosen positions u and v and re-insert it after a third position w, then visit the cities in the new order.
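The two moves can be sketched as follows (the function names are mine); both keep the route a valid permutation of the city labels:

```matlab
function route = swap_move(route)
% Two-point exchange: swap the visiting order of two random cities.
idx = randperm(numel(route), 2);
route(idx) = route(fliplr(idx));
end

function route = insert_move(route)
% Three-point move: cut the segment between positions u and v
% and re-insert it after position w (u < v < w).
n = numel(route);
p = sort(randperm(n, 3));
u = p(1);  v = p(2);  w = p(3);
route = [route(1:u-1), route(v+1:w), route(u:v), route(w+1:n)];
end
```

In each inner-loop iteration one of the two moves is applied to the current route to produce the new candidate solution.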

4. Metropolis acceptance criteria

The acceptance probability is defined by the difference in objective value (the difference in internal energy) between the new solution and the current solution: the new solution is accepted with probability 1 if Δf = f(x') - f(x) < 0, and with probability exp(-Δf/T) otherwise.

2. Code

Main program:

%% Use the simulated annealing method to find the minimum of the function
clear all
f = inline('5*sin(x(1)*x(2))+x(1)^2+x(2)^2','x');  % argument name must be 'x' (no trailing space)
l = [-5 -5];   % lower bounds
u = [5 5];     % upper bounds
x0 = [-4 0];   % initial point
TolFun = 1e-9;
TolX = 1e-5;
kmax = 800;
%% Use the Nelder-Mead method
[xo_nd,fo] = Opt_Nelder(f,x0,TolX,TolFun,kmax)
%% Verify with MATLAB built-in functions
[xos,fos] = fminsearch(f,x0)
[xou,fou] = fminunc(f,x0)
%% Use the simulated annealing method
q = 0.8;
[xo_sa,fo_sa] = Opt_Simu(f,x0,l,u,kmax,q,TolFun)

Subroutine:

function [xo,fo] = Opt_Nelder(f,x0,TolX,TolFun,MaxIter)
% Nelder-Mead method for multi-dimensional optimization problems, dimension >= 2.
N = length(x0);
if N == 1   % one-dimensional case: use quadratic approximation
    [xo,fo] = Opt_Quadratic(f,x0,TolX,TolFun,MaxIter);
    return
end
S = eye(N);
for i = 1:N   % when the dimension exceeds 2, repeat the calculation on each sub-plane
    i1 = i + 1;
    if i1 > N
        i1 = 1;
    end
    abc = [x0; x0 + S(i,:); x0 + S(i1,:)];   % sub-plane in each direction
    fabc = [feval(f,abc(1,:)); feval(f,abc(2,:)); feval(f,abc(3,:))];
    [x0,fo] = Nelder0(f,abc,fabc,TolX,TolFun,MaxIter);
    if N < 3   % the two-dimensional case needs no repetition
        break;
    end
end
xo = x0;
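The main program also calls Opt_Simu, but the original post is truncated before that subroutine is listed. The following is a minimal sketch of my own, consistent with the call `[xo_sa,fo_sa] = Opt_Simu(f,x0,l,u,kmax,q,TolFun)` and with the parameter discussion above (T0 = 1000, geometric cooling); it is not the original author's code.

```matlab
function [xo,fo] = Opt_Simu(f,x0,l,u,kmax,q,TolFun)
% Simulated annealing over the box [l,u]; q is the cooling coefficient.
T = 1000;  Tmin = 0.01;            % temperature range (see the section above)
x = x0;  fx = feval(f,x);
xo = x;  fo = fx;                  % global best so far
for k = 1:kmax
    if T < Tmin, break; end        % minimum-temperature termination
    xnew = x + 0.1*(u - l).*(rand(size(x)) - 0.5);  % perturb the solution
    xnew = min(max(xnew, l), u);   % clip to the feasible box
    fnew = feval(f,xnew);
    dE = fnew - fx;
    if dE < 0 || rand() < exp(-dE/T)   % Metropolis criterion
        x = xnew;  fx = fnew;
    end
    if fx < fo                     % update the global best
        xo = x;  fo = fx;
    end
    T = q*T;                       % cooling schedule
end
% (TolFun could additionally drive an early stop on negligible
%  improvements; it is unused in this minimal sketch.)
end
```

This version cools once per candidate rather than running a separate inner loop of length m, which is a simplification; the nested-loop variant shown earlier is the more faithful structure.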

3. Running results



Origin blog.csdn.net/m0_57943157/article/details/131970054