Optimization algorithms: constrained optimization


Constrained optimization generally refers to optimization problems that include equality constraints and/or inequality constraints.

1. Rosen gradient projection method

It is an optimization algorithm for problems with linear inequality constraints. The basic idea is to start from a feasible point and search for a new feasible point along a direction in which the objective function decreases, projecting the search direction so that the iterate stays feasible; a feasible point is a point that satisfies the constraint conditions. A minimal sketch of the projection idea is given below.
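
As a minimal, hypothetical sketch of the projection idea (not the full Rosen method, which projects the gradient onto the active linear constraints), the following Python snippet handles simple bound constraints $l \le x \le u$, a special case of linear inequalities for which the projection is just coordinate-wise clipping; the objective, bounds, and step size are illustrative assumptions.

```python
import numpy as np

def projected_gradient_descent(f, grad_f, x0, lower, upper,
                               step=0.1, tol=1e-6, max_iter=1000):
    """Gradient descent with projection onto the box lower <= x <= upper.

    Illustrates the projection idea behind gradient projection methods for
    the special case of bound constraints, where projecting onto the
    feasible region is simply clipping each coordinate.
    """
    x = np.clip(x0, lower, upper)          # start from a feasible point
    for _ in range(max_iter):
        # take a descent step, then project back onto the feasible box
        x_new = np.clip(x - step * grad_f(x), lower, upper)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Hypothetical example: minimize (x1 - 2)^2 + (x2 + 1)^2 subject to 0 <= x <= 1
f = lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2
grad_f = lambda x: np.array([2 * (x[0] - 2), 2 * (x[1] + 1)])
print(projected_gradient_descent(f, grad_f, np.array([0.5, 0.5]),
                                 lower=0.0, upper=1.0))   # -> approximately [1, 0]
```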

2. Penalty function method

It can handle general constrained optimization problems; the basic idea is to convert a constrained optimization problem into an unconstrained one. The penalty function is obtained by combining the objective function and the constraint functions in a certain way. For an equality-constrained problem with constraints $h_j(x) = 0$, a penalty function of the following (standard quadratic) form can be defined:
$$F(x, c) = f(x) + c \sum_j h_j(x)^2$$
For inequality constraints $g_i(x) \le 0$, a penalty function of the following form can be defined:
$$F(x, c) = f(x) + c \sum_i \bigl[\max\bigl(0,\ g_i(x)\bigr)\bigr]^2$$
For problems with both equality and inequality constraints, the penalty function can combine the two. Of course, there are other ways to choose the penalty function, but the construction idea is the same: the penalty function equals the original objective function at feasible points and takes a very large value at infeasible points. A small sketch of this construction follows.
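
A minimal Python sketch of this construction; the example problem, constraints, and penalty factor below are hypothetical illustrations, not taken from the original post.

```python
import numpy as np

def penalty(f, eq_constraints, ineq_constraints, c):
    """Return F(x) = f(x) + c * (sum h_j(x)^2 + sum max(0, g_i(x))^2).

    eq_constraints:   list of functions h_j, with h_j(x) == 0 required
    ineq_constraints: list of functions g_i, with g_i(x) <= 0 required
    """
    def F(x):
        p = sum(h(x) ** 2 for h in eq_constraints)
        p += sum(max(0.0, g(x)) ** 2 for g in ineq_constraints)
        return f(x) + c * p
    return F

# Hypothetical problem: minimize x1^2 + x2^2  s.t.  x1 + x2 = 1  and  x1 >= 0.2
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1          # equality constraint h(x) = 0
g = lambda x: 0.2 - x[0]               # inequality constraint g(x) <= 0
F = penalty(f, [h], [g], c=100.0)
print(F(np.array([0.5, 0.5])))         # feasible point: equals f(x) = 0.5
print(F(np.array([0.0, 0.0])))         # infeasible point: heavily penalized
```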

2.1 Exterior point penalty function method

It approximates the optimal point of the original constrained problem by finding the minimum point of the penalty function for a sequence of penalty factors $c_i$. It is called the exterior point penalty function method because the iterates approach the constraint boundary from outside the feasible region. A minimal sketch of the iteration is given below.
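
A minimal sketch of the exterior point iteration under these assumptions, using scipy.optimize.minimize as the inner unconstrained solver; the example problem, the growth factor for the penalty parameter, and the stopping rule are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def exterior_penalty(f, g, x0, c0=1.0, growth=10.0, tol=1e-6, max_outer=20):
    """Exterior point penalty method for min f(x) s.t. g_i(x) <= 0.

    Minimizes f(x) + c * sum(max(0, g_i(x))^2) for an increasing sequence of
    penalty factors c; the iterates approach the boundary from outside the
    feasible region.
    """
    x, c = np.asarray(x0, dtype=float), c0
    for _ in range(max_outer):
        F = lambda x, c=c: f(x) + c * sum(max(0.0, gi(x)) ** 2 for gi in g)
        res = minimize(F, x, method="Nelder-Mead")   # inner unconstrained solve
        x = res.x
        violation = max((max(0.0, gi(x)) for gi in g), default=0.0)
        if violation < tol:        # nearly feasible: stop
            break
        c *= growth                # otherwise increase the penalty factor
    return x

# Hypothetical example: minimize (x1 - 2)^2 + (x2 - 2)^2  s.t.  x1 + x2 <= 2
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2
g = [lambda x: x[0] + x[1] - 2]
print(exterior_penalty(f, g, x0=[0.0, 0.0]))   # approaches (1, 1) from outside
```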

2.2 Interior point penalty function method

All iterations are carried out inside the feasible region, so the point obtained at every iteration is a feasible point.
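
For reference, two commonly used interior (barrier) penalty functions for inequality constraints $g_i(x) \le 0$ are the inverse barrier and the logarithmic barrier (standard textbook forms, given here only for illustration):

$$B(x, r) = f(x) + r \sum_i \frac{1}{-\,g_i(x)}, \qquad B(x, r) = f(x) - r \sum_i \ln\bigl(-g_i(x)\bigr),$$

where the barrier parameter $r > 0$ is decreased toward zero, so that the minimizers of $B$ approach the constrained optimum from inside the feasible region.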

2.3 Mixed penalty function method

It combines the advantage of the exterior point penalty function method (the initial point can be chosen arbitrarily) with the advantage of the interior point penalty function method (every iterate is feasible, so an approximate optimal solution is always available), and can be used to solve optimization problems that contain both equality and inequality constraints.

2.4 Multiplier method

Both the exterior point and the interior point penalty function methods require the penalty factor to tend to infinity in order to obtain the optimal solution of the original problem, but a very large penalty factor causes numerical difficulties. The multiplier method (augmented Lagrangian method) was developed to overcome this defect; it is not elaborated further here.
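
For reference, for an equality-constrained problem $\min f(x)$ subject to $h_j(x) = 0$, the augmented Lagrangian minimized by the multiplier method has the standard textbook form

$$L_c(x, \lambda) = f(x) + \sum_j \lambda_j h_j(x) + \frac{c}{2} \sum_j h_j(x)^2,$$

with the multiplier update $\lambda_j \leftarrow \lambda_j + c\, h_j(x_k)$ after each inner minimization; because the multiplier term does part of the work, the penalty factor $c$ no longer needs to tend to infinity.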

3. Coordinate rotation method

It is derived from the pattern search method among unconstrained optimization algorithms; each search is restricted by the constraint functions so that the search stays within the feasible region.

4. Complex method

It is derived from the simplex method of unconstrained optimization. The optimal solution is sought by constructing a complex (a set of feasible vertices); a new complex is obtained by replacing the worst points of the old complex (the points with the largest or second largest objective function value), and the replacement still uses the basic reflection, contraction, and expansion operations of the simplex method.
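
As a reminder of the basic operation (standard notation, given only for illustration): if $x_h$ is the worst vertex of the complex and $x_c$ is the centroid of the remaining vertices, the reflected trial point is $x_r = x_c + \alpha\,(x_c - x_h)$ with $\alpha > 0$ (Box's complex method commonly uses $\alpha \approx 1.3$); if $x_r$ is infeasible or still the worst point, it is contracted back toward $x_c$.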

Origin blog.csdn.net/woaiyyt/article/details/113785709