Introduction to Convex Optimization CH1

CH1 Introduction

Convex optimization

minimize f0(x)

subject to fi(x) ≤ bi,  i = 1, 2, ..., m

f0(x): R^n → R is the objective function; fi(x): R^n → R, i = 1, ..., m, are the constraint functions.

Linear programming: for any x, y ∈ R^n and α, β ∈ R, the functions satisfy fi(αx + βy) = αfi(x) + βfi(y), i.e., the objective and constraint functions are all linear.

Convex optimization: for any x, y ∈ R^n and any α, β ∈ R satisfying α + β = 1, α ≥ 0, β ≥ 0, the following inequality holds: fi(αx + βy) ≤ αfi(x) + βfi(y).
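As a quick numerical illustration of the defining inequality (a sketch, not from the text, using the convex function f(x) = x²), the inequality can be checked at random points:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # f(x) = x^2 is convex, so it must satisfy the inequality above.
    return x ** 2

for _ in range(1000):
    x, y = rng.normal(size=2)
    alpha = rng.uniform()      # alpha in [0, 1]
    beta = 1.0 - alpha         # so alpha + beta = 1, both nonnegative
    lhs = f(alpha * x + beta * y)
    rhs = alpha * f(x) + beta * f(y)
    assert lhs <= rhs + 1e-12  # the convexity inequality holds
print("convexity inequality verified at 1000 random points")
```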

Least squares and linear programming are special classes of convex optimization problems. Many efficient algorithms exist for solving convex optimization problems, and in some cases it can be shown that interior-point methods can solve them to a given accuracy in polynomial time.

One point worth mentioning is that by adding a regularization term to the least-squares cost function, which penalizes x when it is too large, the resulting solution can be made more realistic than by optimizing the first term alone. In statistical estimation, when the distribution of x is known in advance, regularization can be used. Least squares with a quadratic regularization term: Σ_{i=1}^k (a_i^T x − b_i)² + ρ Σ_{i=1}^n x_i².
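The regularized problem above also has a closed-form solution, (AᵀA + ρI)x = Aᵀb, and is equivalent to an ordinary least-squares problem on a stacked system. A sketch verifying both routes agree (random illustrative data, ρ chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
k, n, rho = 20, 5, 0.5
A = rng.normal(size=(k, n))
b = rng.normal(size=k)

# Regularized normal equations: (A^T A + rho I) x = A^T b.
x_reg = np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ b)

# Equivalent ordinary least-squares problem on the stacked matrix
# [A; sqrt(rho) I] with right-hand side [b; 0].
A_aug = np.vstack([A, np.sqrt(rho) * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
x_aug, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

print(np.allclose(x_reg, x_aug))  # both routes give the same minimizer
```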

Conceptually, a problem can be solved quickly once it is formulated as a convex optimization problem. The key skills are judging whether a problem is convex and transforming it into a convex optimization problem.

Nonlinear optimization

Local optimization

Instead of searching for the feasible point that globally minimizes the objective function, local optimization seeks a locally optimal (satisfactory) solution. It only requires the objective and constraint functions to be differentiable, and it can be solved quickly. Its disadvantages are:
- Unable to estimate how far the local optimum is from the global optimum
- Sensitive to initial values
- Sensitive to parameter values

Local optimization requires selecting a suitable algorithm, tuning its parameters, and choosing a good enough initial point (or providing a method for selecting a better one). Local optimization is therefore more of an art than a technology. Unlike convex optimization, modeling a real problem as a nonlinear optimization problem is fairly straightforward; the art lies mostly in the solving.
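The sensitivity to initial values can be seen in a tiny hypothetical example (not from the text): plain gradient descent on the non-convex function f(x) = x⁴ − 3x² + x lands in different local minima depending only on where it starts.

```python
def grad(x):
    # Derivative of f(x) = x**4 - 3*x**2 + x, which has two local minima.
    return 4 * x ** 3 - 6 * x + 1

def gradient_descent(x0, step=0.01, iters=2000):
    x = x0
    for _ in range(iters):
        x -= step * grad(x)
    return x

# Same algorithm, same parameters -- only the initial point differs.
x_left = gradient_descent(-2.0)   # converges near x = -1.30 (the global minimum)
x_right = gradient_descent(2.0)   # converges near x = 1.13 (a worse local minimum)
print(round(x_left, 2), round(x_right, 2))
```

From one starting point the method finds the global minimum, from the other only a local one, and nothing in the algorithm's output reveals the difference, which is exactly the first disadvantage listed above.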

Global optimization

Global optimization can find the absolute worst-case parameter values, so it is often used for worst-case analysis and verification problems in high-value systems and safety-critical systems. If the system's performance is still acceptable in the worst case, the system is safe and reliable.

Application of convex optimization in non-convex optimization (omitted)

Using Convex Optimization to Find Initial Values in Local Optimization

Heuristics in Nonconvex Optimization

Bounds for Global Optimization
