Solving the Traveling Salesman Problem with Simulated Annealing


 


1. Introduction

Since Kirkpatrick, Gelatt, and Vecchi published their pioneering paper, built on earlier research in statistical mechanics, the simulated annealing algorithm has been hailed as a "savior" for many difficult combinatorial optimization problems. It has been applied in computer-aided design (such as VLSI layout), image processing and computer vision, telecommunications, economics, and many other engineering and scientific fields.

2. Introduction to the Simulated Annealing Algorithm

1. The solid annealing process
The simulated annealing algorithm is derived from the principle of solid annealing: a solid is heated to a sufficiently high temperature and then cooled slowly. As the solid heats up, its internal particles become increasingly disordered and its internal energy increases. As it cools slowly, the particles gradually become ordered again, reaching an equilibrium state at each temperature and finally the ground state at room temperature, at which point the internal energy is reduced to a minimum.

 

2. Principle of simulated annealing algorithm

The idea behind the simulated annealing algorithm (Simulated Annealing, SA) was first proposed by N. Metropolis et al. in 1953; in 1983, S. Kirkpatrick and others successfully introduced the annealing idea into the field of combinatorial optimization. SA is a stochastic optimization algorithm based on a Monte Carlo iterative solution strategy, and its starting point is the similarity between the annealing process of solid matter in physics and general combinatorial optimization problems. The algorithm starts from a relatively high initial temperature and, as the temperature parameter decreases, uses a probabilistic jump property to search randomly for the global optimum of the objective function in the solution space: it can probabilistically jump out of local optima and eventually tend toward the global optimum. Simulated annealing is a general-purpose optimization algorithm; by giving the search process a time-varying acceptance probability that eventually goes to zero, this serial structure can effectively avoid getting trapped in local minima and eventually tends toward the global optimum.
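The probabilistic jump described above is the Metropolis acceptance rule. As a minimal sketch (the function and variable names here are our own, not from the original MATLAB simulation):

```python
import math
import random

def metropolis_accept(e_current, e_new, temperature):
    """Metropolis acceptance rule: always accept an improving move;
    accept a worsening move with probability exp(-dE/T)."""
    delta = e_new - e_current
    if delta <= 0:  # new state is no worse: always accept
        return True
    # worse state: acceptance probability shrinks as the temperature falls
    return random.random() < math.exp(-delta / temperature)
```

At high temperature almost any move is accepted (global exploration); as the temperature approaches zero the rule degenerates into pure greedy descent.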

3. The purpose of the simulated annealing algorithm

The objective functions of many practical optimization problems are non-convex and have many local optima. Effectively finding the global optimum of a general non-convex objective function remains a difficult problem, especially because the number of local optima grows rapidly as the size of the problem increases.

The simulated annealing algorithm exploits the similarity between the problem-solving process and the annealing of a molten object, completing the solution by randomly simulating that annealing process: the parameter values are adjusted under the action of a control parameter (the temperature) until the selected values finally bring the energy function to its global minimum.

4. The simulated annealing algorithm process

Algorithm steps:
Step 1: Select an appropriate objective function f as the energy function E for the problem; determine the initial parameters: starting temperature T, ending temperature, cooling rate α (α ∈ [0,1]), and the number of iterations k at each temperature.
Step 2: Set the iteration counter t = 0, generate the initial state X0, and calculate its energy E0.
Step 3: Using the current solution as the center, generate a neighboring new solution X1 with the state generation function and calculate its energy E1.
Step 4: Use the Metropolis acceptance rule to compare the energies of the two states and decide whether to accept X1. If accepted, the current state becomes X1; if not, it remains X0.
Step 5: Update the iteration counter and check whether it has reached the set value k; if so, cool down (T = T·α) and reset t = 0.
Step 6: Check whether the temperature has reached the ending temperature; if so, proceed to Step 7; if not, return to Step 3 and repeat.
Step 7: Output the current solution as the optimal solution.
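The seven steps above can be sketched as a generic loop. This is a hypothetical Python sketch, not the original MATLAB code; the energy and neighbor functions are placeholders supplied by the caller:

```python
import math
import random

def simulated_annealing(energy, neighbor, x0,
                        t_start=100.0, t_end=1e-3, alpha=0.95, k=100):
    """Generic simulated annealing loop following Steps 1-7.

    energy:   objective function f used as the energy E
    neighbor: state generation function (returns a new state near x)
    x0:       initial state
    """
    x, e = x0, energy(x0)
    t = t_start
    while t > t_end:                      # Step 6: stop at the ending temperature
        for _ in range(k):                # Step 5: k iterations per temperature
            x_new = neighbor(x)           # Step 3: neighboring candidate
            e_new = energy(x_new)
            delta = e_new - e
            # Step 4: Metropolis acceptance rule
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x, e = x_new, e_new
        t *= alpha                        # Step 5: cool down, T = T * alpha
    return x, e                           # Step 7: output the current solution
```

For example, minimizing f(x) = x² from x₀ = 10 with a random step as the neighbor function drives the energy toward 0.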
Algorithm flowchart:

5. Advantages and disadvantages of the simulated annealing algorithm

1. Advantages
(1) It accepts deteriorated solutions with a certain probability. By introducing the natural mechanism of the physical annealing process, the algorithm accepts, with a certain probability, test points that make the objective function value "worse," and does not force the next state to be better than the previous one.

(2) Comparison of the hill-climbing algorithm with simulated annealing.
Hill-climbing algorithm: suppose point C is the current solution. The hill-climbing algorithm stops searching once it finds the local optimum at point A, because no small move from A in any direction can yield a better solution.
Simulated annealing is essentially a greedy algorithm, but its search introduces random factors and accepts, with a certain probability, a solution worse than the current one. It may therefore jump out of a local optimum and reach the global optimum. After finding the local optimum A, simulated annealing will accept the move to E with a certain probability; after a few such non-improving moves it may reach point D, thereby escaping the local optimum A.

(3) It introduces an algorithm control parameter T.
The parameter T divides the optimization process into stages and determines the selection criterion for random states at each stage; the acceptance function is given by the Metropolis algorithm with a simple mathematical model.
The two key steps of simulated annealing are: first, at each value of the control parameter T, generate a neighboring random state from the previous iterate, and let the acceptance criterion determined by T decide whether to take this new state; second, slowly decrease the control parameter T, tightening the acceptance criterion, until T → 0, at which point the state chain settles at the optimal state of the optimization problem. This improves the reliability of simulated annealing's global optimum.

(4) It searches using the objective function value only.
Traditional search algorithms often need not only the objective function value but also auxiliary information, such as its derivative, to determine the search direction; when this information is unavailable, those algorithms fail. The simulated annealing algorithm uses only the fitness value derived from the objective function to determine the further search direction and range, with no other auxiliary information.

(5) It searches complex regions well.
Simulated annealing excels at searching complex regions and locating promising areas, but it is not efficient for simple problems.

2. Disadvantages
(1) Long solution time: when there are many variables and the objective function is complex, obtaining a good approximate solution requires the control parameter T to start from a large value and requires many Metropolis iterations at each temperature, so the iterative computation is slow.

(2) Performance depends on the initial value, and the algorithm is sensitive to its parameters: the initial temperature T and the reduction step size are difficult to determine. If the initial T is large and the reduction step is small, a better result can be obtained in the end, but the algorithm converges too slowly; if the initial T is small and the reduction step is large, the global optimum will likely not be found.

(3) The best solution encountered during the search may be lost because of the probabilistic acceptance step.

6. Improved simulated annealing algorithms

(1) Simulated annealing with memory.
During its iterations, simulated annealing can accept not only solutions that improve the objective function but also, within limits, solutions that make it worse; this lets the algorithm escape local-minimum traps. However, for engineering problems with many extrema, the algorithm can hardly guarantee that the final solution is the best one reached during the entire search. To solve this, we can add a memory to the algorithm so that it remembers the best result ever achieved during the search, which in many cases improves the quality of the final solution.
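A minimal sketch of the memory idea: track the best state ever visited alongside the normal accept/reject chain, and return the remembered best rather than the final state (a hypothetical Python sketch; all names are illustrative):

```python
import math
import random

def sa_with_memory(energy, neighbor, x0,
                   t=100.0, t_end=1e-3, alpha=0.95, k=50):
    """Simulated annealing that remembers the best solution ever visited,
    so a late probabilistic acceptance of a worse state cannot lose it."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e                 # the "memory"
    while t > t_end:
        for _ in range(k):
            x_new = neighbor(x)
            e_new = energy(x_new)
            if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
                x, e = x_new, e_new
                if e < best_e:            # update the memory on every improvement
                    best_x, best_e = x, e
        t *= alpha
    return best_x, best_e                 # return the remembered best, not the last state
```

The only change relative to plain simulated annealing is the two `best_*` variables; the search dynamics are untouched.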

(2) Simulated annealing with monotonic reheating.
When the temperature T is large enough, the acceptance probability approaches 1 and the algorithm searches globally; when T is small enough, the acceptance probability approaches 0, and if the search then falls into a local optimum, the time needed to escape it becomes very long. Since the high cost of escaping comes from the acceptance probability of worse solutions being too low (that is, the temperature being too low), the temperature can be raised artificially after the search enters a local optimum, increasing the probability of accepting worse solutions and making it relatively easy to escape. This is the source of the monotonic reheating idea.

(3) Parallel simulated annealing.
Given a simulated annealing scheme, that is, a set of annealing parameter values and their update rules, N initial values are independently selected in the region D according to some distribution and the computations run simultaneously. The N results obtained can be regarded as independent and identically distributed random vectors.

(4) Add a supplementary search process: after the annealing is over, run the simulated annealing process again, or a local chemotaxis search, with the best solution found so far as the initial state.

(5) Choose a suitable initial state.

(6) Design an appropriate termination criterion for the algorithm.

7. Applications of the simulated annealing algorithm

The TSP (Traveling Salesman Problem) is one of the most representative combinatorial optimization problems, and its applications have gradually penetrated various technical fields and our daily lives. It was originally proposed for transportation, such as aircraft routing, mail delivery, express services, and school-bus route design. Its range of applications has since expanded to many other fields, such as VLSI chip design, circuit-board layout, robot control, vehicle routing, and logistics distribution.


The Traveling Salesman Problem (TSP): a traveling salesman must visit several cities, and the travel cost between each pair of cities is known. To save money, he starts from his home city, visits every city exactly once, and returns home. How should he order the cities so that the total cost of the journey is minimized?

This problem is NP-complete. There are two main classes of algorithms:
(1) Exhaustive search: the complexity is O(n!); when the number of cities is large the running time is unacceptable, but the result is an exact solution.
(2) Stochastic search algorithms: the hill-climbing algorithm, simulated annealing, ant colony algorithms, and so on.
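For comparison, the O(n!) exhaustive method is trivial to write for small instances. A sketch (the city coordinates in the usage example are made up for illustration):

```python
import itertools
import math

def tour_length(order, dist):
    """Total length of a closed tour visiting cities in the given order."""
    n = len(order)
    return sum(dist[order[i]][order[(i + 1) % n]] for i in range(n))

def tsp_brute_force(dist):
    """Exact TSP by enumerating all (n-1)! tours that start at city 0."""
    n = len(dist)
    best = None
    for perm in itertools.permutations(range(1, n)):
        order = (0,) + perm
        length = tour_length(order, dist)
        if best is None or length < best[1]:
            best = (order, length)
    return best
```

For four cities at the corners of a unit square, the optimum is the perimeter tour of length 4; beyond roughly a dozen cities the factorial enumeration becomes hopeless, which is exactly where stochastic methods like simulated annealing come in.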

 

 

MATLAB simulation results (figures not reproduced here):

8. Questions

1. When using the simulated annealing algorithm to solve the TSP, a new solution must be generated in each iteration. How is this done in the simulation, and on what basis?

Answer: In the simulation above, a new solution is generated by randomly exchanging the positions of two cities in the route; the number of exchanges decreases as the temperature decreases, with a minimum of one. There are many ways to generate new solutions for the TSP. Three traditional methods are common: the two-point exchange, the two-segment exchange, and the three-segment exchange; improved methods aim to enlarge the neighborhood of the new solution. Here we use an improved two-point exchange. A fundamental requirement for the new-solution generator in the TSP is reachability: all solutions must be reachable from any initial solution (because any solution may be the optimum). The two-point exchange can reach every possible solution, and the maximum number of exchanges needed to go from one solution to another is n×(n−1)/2. In addition, the neighborhood must be large enough to ensure local optima can be escaped. Since we want a large neighborhood when T is large (which helps escape local optima) and a small neighborhood when T is small (in the later iterations), while still being able to reach all solutions, we adopt a two-point exchange in which the number of exchanges decreases with the temperature.
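The move described in this answer can be sketched as follows (a hypothetical Python version of the idea; the original simulation is in MATLAB, and the names here are our own):

```python
import random

def two_point_swap(route, n_swaps=1):
    """Generate a neighboring TSP route by swapping the positions of two
    randomly chosen cities; n_swaps can shrink as the temperature falls."""
    new_route = list(route)
    for _ in range(max(1, n_swaps)):
        i, j = random.sample(range(len(new_route)), 2)  # two distinct positions
        new_route[i], new_route[j] = new_route[j], new_route[i]
    return new_route
```

Every call returns a permutation of the same cities, so the tour stays valid; repeated swaps can transform any route into any other, which is the reachability property the answer requires.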

2. How can one judge from the graph of distance versus iteration count whether the iteration has converged?

Answer: In simulated annealing, iteration generally stops based on the annealing temperature, the number of iterations, or no new solution having been accepted for a long time. From the graph of distance versus iteration count, if the change in distance between successive iterations falls below a certain level, the iteration can be roughly judged complete. As the number of iterations tends to infinity, the result of simulated annealing can be considered to converge to the global optimum with probability 1; with a finite number of iterations, however, the solution obtained may only be a good one, not necessarily the global optimum.
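The "no new solution accepted for a long time" rule mentioned above can be written as a small helper (a sketch; the window size is an arbitrary choice, not from the original simulation):

```python
def should_stop(accept_history, window=200):
    """Stop when no candidate has been accepted in the last `window`
    iterations; accept_history is a list of per-iteration booleans."""
    recent = accept_history[-window:]
    return len(recent) == window and not any(recent)
```

In practice this is combined with the temperature check: the loop ends when either the temperature reaches its floor or the chain has stagnated.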


Origin blog.csdn.net/Xiaoxll12/article/details/105177389