Optimization algorithms ------ unconstrained multidimensional extrema


The general expression of the unconstrained multidimensional extreme value problem is

min f(x),  x ∈ R^n
In general one wants the global optimum, but most optimization algorithms cannot guarantee it. However, every problem has its own application background, the local optima are usually not numerous, and sometimes a local optimum is in fact the global optimum; most of the time the usability of a result can be judged from experience. (If you need MATLAB implementations of the algorithms below, you can message me privately; I will write the code and reply as soon as I see your message.)
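As a quick illustration of the local-versus-global issue, here is a minimal Python sketch; the objective function and starting points are made up for illustration. A generic local optimizer can converge to different local minima depending on where it starts:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective with several local minima (illustrative only)
def f(x):
    return np.sin(3 * x[0]) + 0.1 * (x[0] - 0.5) ** 2 + x[1] ** 2

for x0 in ([2.0, 0.0], [-2.0, 0.0]):
    res = minimize(f, x0)          # SciPy's default local method (BFGS, a quasi-Newton method)
    print("start", x0, "->", res.x, "f =", res.fun)
```

The two starting points land in different valleys, which is exactly why experience about the problem background is needed to judge whether a returned local optimum is acceptable.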

1. Direct method

Direct methods are algorithms that do not require computing derivatives. They search for a direction in which the function decreases along the coordinate axes, or along predetermined directions, so in essence they are search-trial-advance iterative processes. They are slower than algorithms that use derivatives, but each iteration is relatively simple and easy to program.

1.1 Pattern search method

The pattern search method is also called the Hooke-Jeeves method. The algorithm consists of two kinds of moves: exploratory moves and pattern moves. An exploratory move is a move along the coordinate axes, while a pattern move is a move along the line joining two adjacent base points. The two kinds of moves alternate, the aim being to search along the direction in which the function value decreases best.
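A minimal Python sketch of the Hooke-Jeeves idea; the step size, shrink factor, and the quadratic test function are assumptions chosen for illustration:

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Pattern search: exploratory moves along the coordinate axes,
    then a pattern move along the line joining the last two base points."""
    def explore(base, h):
        x = base.copy()
        for i in range(len(x)):            # exploratory moves, axis by axis
            for delta in (h, -h):
                trial = x.copy()
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = np.asarray(x0, dtype=float)
    h = step
    for _ in range(max_iter):
        new = explore(base, h)
        if f(new) < f(base):
            # pattern move: jump along the line from the old base to the new one
            pattern = new + (new - base)
            base = new
            candidate = explore(pattern, h)
            if f(candidate) < f(base):
                base = candidate
        else:
            h *= shrink                    # no improvement: refine the step size
            if h < tol:
                break
    return base

# Example: minimize a simple quadratic starting from (3, -2)
print(hooke_jeeves(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [3.0, -2.0]))
```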

1.2 Rosenbrock method

This is a rotating-axis method. The basic idea is to construct n orthogonal directions at the current point, explore along each of these directions, find the direction in which the function value decreases fastest, move a certain step along it, then construct n new orthogonal directions at the new point, and repeat.
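A rough sketch of the rotating-axis idea; the expansion/contraction factors and the Gram-Schmidt rebuild of the directions below are one common variant, chosen here as an assumption:

```python
import numpy as np

def rosenbrock_method(f, x0, step=0.5, tol=1e-6, max_stages=100):
    """Rotating-axis search: explore along n orthogonal directions, then
    rebuild the directions (Gram-Schmidt) from the progress of the stage."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    D = np.eye(n)                          # current orthonormal directions
    for _ in range(max_stages):
        x_start = x.copy()
        lam = np.zeros(n)                  # net displacement along each direction
        h = np.full(n, step)
        for _ in range(20):                # exploratory sweeps within one stage
            for i in range(n):
                trial = x + h[i] * D[i]
                if f(trial) < f(x):
                    x, lam[i] = trial, lam[i] + h[i]
                    h[i] *= 3.0            # expand after a success
                else:
                    h[i] *= -0.5           # reverse and shrink after a failure
        if np.linalg.norm(x - x_start) < tol:
            break
        # Rotate: the new first direction points along the total progress.
        A = [sum(lam[j] * D[j] for j in range(i, n)) for i in range(n)]
        new_D = []
        for a in A + list(D):              # fall back to old axes if degenerate
            w = a - sum((a @ d) * d for d in new_D)
            if np.linalg.norm(w) > 1e-12:
                new_D.append(w / np.linalg.norm(w))
            if len(new_D) == n:
                break
        D = np.array(new_D)
    return x

# Example: minimize (x-1)^2 + 10*(y-x^2)^2 starting from (-1.5, 2)
print(rosenbrock_method(lambda x: (x[0]-1)**2 + 10*(x[1]-x[0]**2)**2, [-1.5, 2.0]))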

1.3 Simplex search method

It approaches the minimum point by constructing a simplex. For each simplex, the highest and lowest vertices are determined, and then a new simplex is constructed through reflection, expansion, and contraction, the goal being to enclose the minimum point inside the simplex.
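For reference, SciPy's minimize exposes the Nelder-Mead simplex algorithm directly; the objective and tolerances below are arbitrary examples:

```python
from scipy.optimize import minimize

f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2 + x[0] * x[1]
res = minimize(f, x0=[0.0, 0.0], method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8})
print(res.x, res.fun)
```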

1.4 Powell method

In each stage of the search, Powell's method first searches in turn along n known directions to obtain a base point for the next search, then searches along the line joining two adjacent base points to obtain a new base point, and uses this new direction to replace one of the previous n directions.
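Powell's method is also available in SciPy; the initial set of search directions can optionally be supplied through the direc option (the objective and values below are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2 + x[0] * x[1]
res = minimize(f, x0=[0.0, 0.0], method='Powell',
               options={'direc': np.eye(2)})   # start from the coordinate axes
print(res.x, res.fun)
```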

2. Indirect method

2.1 Steepest descent method

Its search direction is the negative gradient direction of the objective function: at each iterate it moves along the negative gradient, repeating until it reaches a minimum point of the objective function.
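A minimal steepest-descent sketch with a backtracking (Armijo) line search; the line-search constants and the test function are assumptions for illustration:

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:        # stop when the gradient is small
            break
        d = -g                             # search along the negative gradient
        t = 1.0
        for _ in range(50):                # backtracking (Armijo) line search
            if f(x + t * d) <= f(x) + 1e-4 * t * (g @ d):
                break
            t *= 0.5
        x = x + t * d
    return x

# Example: quadratic bowl with minimum at (1, -2)
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)])
print(steepest_descent(f, grad, [5.0, 5.0]))
```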

2.2 Conjugate gradient method

It is an algorithm that uses the gradient of the objective function to successively generate conjugate directions, which serve as the line-search directions. Each new search direction is built from the current gradient and is conjugate to the previous ones, and the step length along it is determined by a one-dimensional extremum (line-search) algorithm.
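A compact nonlinear conjugate gradient sketch using the Fletcher-Reeves update, which is one common way of generating the conjugate directions; the test function and line-search constants are assumptions:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Fletcher-Reeves nonlinear conjugate gradient with backtracking line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0                              # backtracking line search along d
        for _ in range(50):
            if f(x + t * d) <= f(x) + 1e-4 * t * (g @ d):
                break
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        d = -g_new + beta * d                # next conjugate search direction
        if g_new @ d >= 0:                   # restart if d is not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# Example: f(x, y) = (x - 1)^2 + 10 (y - x^2)^2
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1) - 40 * x[0] * (x[1] - x[0] ** 2),
                           20 * (x[1] - x[0] ** 2)])
print(nonlinear_cg(f, grad, [-1.2, 1.0]))
```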

2.3 Newton's method

Newton's method is based on the Taylor expansion of a multivariate function. The basic iteration formula is

x_{k+1} = x_k - [∇²f(x_k)]^{-1} ∇f(x_k)

Because of some limitations of Newton's method (the Hessian must be computed and inverted at every step, and it must be well behaved), modified Newton methods and quasi-Newton methods were later developed.
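A minimal Newton iteration sketch; the test function and its hand-written gradient and Hessian are illustrative assumptions:

```python
import numpy as np

def newton_method(grad, hess, x0, tol=1e-8, max_iter=50):
    """Plain Newton iteration: x_{k+1} = x_k - H(x_k)^{-1} * grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)   # solve H p = g instead of inverting H
    return x

# Example: f(x, y) = (x - 1)^2 + 10 (y - x^2)^2
grad = lambda x: np.array([2 * (x[0] - 1) - 40 * x[0] * (x[1] - x[0] ** 2),
                           20 * (x[1] - x[0] ** 2)])
hess = lambda x: np.array([[2 - 40 * (x[1] - 3 * x[0] ** 2), -40 * x[0]],
                           [-40 * x[0],                       20.0     ]])
print(newton_method(grad, hess, [-1.2, 1.0]))
```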

2.4 Trust region method

It is a relatively complex but very effective optimization algorithm. It obtains new iterates by solving a quadratic program within a trust region: at each iterate, under a given trust-region radius, the search direction is obtained by solving the sub-problem below, and the trust-region radius is then adjusted once the step has been evaluated.
min_d  m_k(d) = f(x_k) + ∇f(x_k)^T d + (1/2) d^T B_k d,   subject to ||d|| ≤ Δ_k

where B_k is the Hessian of f at x_k (or an approximation of it) and Δ_k is the trust-region radius.
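SciPy implements several trust-region variants; a minimal example using its built-in Rosenbrock helpers, with trust-ncg picked here as one representative method:

```python
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

# Trust-region Newton-CG: each step minimizes a quadratic model inside the
# current trust-region radius, then the radius is updated from the result.
res = minimize(rosen, x0=[-1.2, 1.0], method='trust-ncg',
               jac=rosen_der, hess=rosen_hess)
print(res.x, res.nit)
```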

2.5 Explicit steepest descent method

Basically useless.


Origin blog.csdn.net/woaiyyt/article/details/113781887