Optimization algorithms: nonlinear least squares optimization

Nonlinear least squares optimization is also known as the unconstrained minimum sum-of-squares problem. It has the following form:

$$\min_{x \in \mathbb{R}^n} S(x) = R(x)^T R(x) = \sum_{i=1}^{m} r_i(x)^2, \qquad R(x) = \big(r_1(x), \ldots, r_m(x)\big)^T.$$
If $R(x)$ is a linear function of $x$, the problem reduces to a linear least squares problem, which can be solved directly by specialized routines.
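For instance, when $R(x) = Ax - b$, the minimizer can be computed in closed form. A minimal NumPy sketch, with made-up example data (the matrix `A` and vector `b` below are hypothetical):

```python
import numpy as np

# Hypothetical design matrix and observations, for illustration only.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.1, 1.9, 3.2])

# Direct solution of min_x ||A x - b||^2 via a dedicated linear
# least squares routine (no iteration needed in the linear case).
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)
```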

1. G-N method

The G-N method is derived from Newton's method for unconstrained optimization. Because the objective function of the nonlinear least squares problem has a simple special structure, its gradient and Hessian can be written explicitly in terms of the Jacobian matrix of $R$; substituting them into the Newton iteration yields the G-N method. From the expression of the objective function, we have
$$\nabla S(x) = 2\,J(x)^T R(x),$$

where $J(x)$ is the Jacobian matrix of $R(x)$, and differentiating once more,

$$\nabla^2 S(x) = 2\,J(x)^T J(x) + 2\sum_{i=1}^{m} r_i(x)\,\nabla^2 r_i(x).$$
Substituting this gradient and Hessian into the Newton iteration for unconstrained optimization gives
$$x^{k+1} = x^k - \left[ J_k^T J_k + \sum_{i=1}^{m} r_i(x^k)\,\nabla^2 r_i(x^k) \right]^{-1} J_k^T R(x^k), \qquad J_k = J(x^k).$$
Since the second-order term $\sum_{i=1}^{m} r_i(x^k)\,\nabla^2 r_i(x^k)$ requires the Hessian of every residual and is expensive to compute, it is simply dropped, which yields the G-N method:
$$x^{k+1} = x^k - \left(J_k^T J_k\right)^{-1} J_k^T R(x^k).$$
The G-N method is only locally convergent and depends heavily on the initial point: it can be expected to converge only when the starting point is already close to the minimum point.
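A minimal NumPy sketch of the G-N iteration above. The exponential model $y = a\,e^{bt}$, the data, and the starting point are all hypothetical, chosen so that the initial point is close to the minimizer, as the local-convergence caveat requires:

```python
import numpy as np

def gauss_newton(R, J, x0, tol=1e-8, max_iter=50):
    """G-N iteration: x^{k+1} = x^k - (J_k^T J_k)^{-1} J_k^T R(x^k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = R(x)    # residual vector R(x^k)
        Jk = J(x)   # Jacobian matrix J_k
        # Solve the normal equations (J_k^T J_k) v = -J_k^T R(x^k).
        v = np.linalg.solve(Jk.T @ Jk, -Jk.T @ r)
        x = x + v
        if np.linalg.norm(v) < tol:
            break
    return x

# Hypothetical example: fit y = a * exp(b * t), so the residuals are
# r_i(x) = a * exp(b * t_i) - y_i with x = (a, b).
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 2.7, 3.7, 5.0])
R = lambda x: x[0] * np.exp(x[1] * t) - y
J = lambda x: np.column_stack([np.exp(x[1] * t),              # dr/da
                               x[0] * t * np.exp(x[1] * t)])  # dr/db
print(gauss_newton(R, J, x0=[1.5, 0.2]))
```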

2. Modified G-N method

To overcome these shortcomings of the G-N method, two modifications are commonly used.
1. After an approximate minimum point $x^k$ has been obtained, compute

$$x^{k+1} = x^k + \alpha_k v^k, \qquad v^k = -\left(J_k^T J_k\right)^{-1} J_k^T R(x^k),$$

where the step size $\alpha_k$ is determined by applying a one-dimensional unconstrained optimization method to $\min_{\alpha} S(x^k + \alpha v^k)$. Because every step requires a one-dimensional search, the amount of computation is large.
2. Select a small positive number $\delta_k$ such that $S(x^k + \delta_k v^k) < S(x^k)$. This scheme requires much less computation, since a fairly simple rule suffices to determine the value of $\delta_k$; one such rule is sketched below.
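A minimal sketch of the second scheme, using step halving as one simple (assumed, not prescribed by the text) rule for choosing $\delta_k$; replacing the halving loop with an exact one-dimensional minimization over $\alpha$ would give the first scheme instead:

```python
import numpy as np

def damped_gauss_newton(R, J, x0, tol=1e-8, max_iter=50):
    """Modified G-N: shrink the step until S(x + delta*v) < S(x)."""
    S = lambda x: R(x) @ R(x)  # objective S(x) = R(x)^T R(x)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, Jk = R(x), J(x)
        # G-N direction v^k from the normal equations.
        v = np.linalg.solve(Jk.T @ Jk, -Jk.T @ r)
        # Halve delta until the objective actually decreases (scheme 2).
        delta = 1.0
        while S(x + delta * v) >= S(x) and delta > 1e-12:
            delta *= 0.5
        x = x + delta * v
        if np.linalg.norm(delta * v) < tol:
            break
    return x
```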

3. L-M method

When the matrix $J_k^T J_k$ is ill-conditioned, the G-N algorithm may fail to produce a correct solution, and when $J_k^T J_k$ is singular, the G-N step cannot be computed at all. The L-M algorithm damps the coefficient matrix, changing the properties of $J_k^T J_k$ so that the iteration can proceed. The L-M algorithm has two main steps: the first solves the equation

$$\left(J_k^T J_k + \mu I\right) v^k = -J_k^T R(x^k)$$

for the increment $v^k$ of the independent variable; the second is the adjustment rule for the damping coefficient $\mu$.
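A minimal NumPy sketch of the L-M iteration. The halve-on-success, double-on-failure update of $\mu$ is one common adjustment rule assumed here for illustration; production implementations typically use a more refined gain-ratio update:

```python
import numpy as np

def levenberg_marquardt(R, J, x0, mu=1e-2, tol=1e-8, max_iter=100):
    """L-M: solve (J_k^T J_k + mu I) v = -J_k^T R(x^k), adapt mu."""
    S = lambda x: R(x) @ R(x)  # objective S(x) = R(x)^T R(x)
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        r, Jk = R(x), J(x)
        # Damping makes the coefficient matrix positive definite even
        # when J_k^T J_k itself is singular or ill-conditioned.
        v = np.linalg.solve(Jk.T @ Jk + mu * np.eye(n), -Jk.T @ r)
        if S(x + v) < S(x):
            x = x + v
            mu *= 0.5   # success: move toward the pure G-N step
        else:
            mu *= 2.0   # failure: increase damping (shorter, safer step)
        if np.linalg.norm(v) < tol:
            break
    return x
```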

Origin: blog.csdn.net/woaiyyt/article/details/113789781