Linear algebra: matrix norm and condition number

1. Matrix norm

How do we measure the size of a matrix? For a vector \(\boldsymbol x\) we speak of its length \(||\boldsymbol x||\); for a matrix \(A\) we speak of its norm \(||A||\). Sometimes the word "norm" is also used for the length of a vector, but for a matrix we always say norm. There are many ways to define the norm of a matrix; let us look at the requirements every norm must satisfy and then select one.

The Frobenius norm squares every element, \(|a_{ij}|^2\), adds them all up, and takes the square root to get \(||A||_F\). This treats the matrix as one long vector with \(n^2\) elements, which is sometimes useful, but it is not the norm we choose here.
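As a quick sketch in plain Python (the 2x2 matrix here is just a made-up illustrative example):

```python
import math

# Hypothetical example matrix.
A = [[1.0, 2.0],
     [3.0, 4.0]]

# Frobenius norm: square every entry, sum them all, take the square root.
fro = math.sqrt(sum(a_ij ** 2 for row in A for a_ij in row))
print(fro)  # sqrt(1 + 4 + 9 + 16) = sqrt(30) ≈ 5.477
```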

A vector norm satisfies the triangle inequality, i.e. \(||\boldsymbol x + \boldsymbol y||\) is no greater than \(||\boldsymbol x|| + ||\boldsymbol y||\), and the length of \(2\boldsymbol x\) or \(-2\boldsymbol x\) is twice the length of \(\boldsymbol x\). The same rules apply to the norm of a matrix:

\[||A+B|| \leqslant ||A|| + ||B||, \quad ||cA|| = |c| \space ||A||\]

The second requirement for a matrix norm is new, because matrices can be multiplied. The norm \(||A||\) controls the growth from \(\boldsymbol x\) to \(A\boldsymbol x\), and from \(B\) to \(AB\):

\[||A\boldsymbol x|| \leqslant ||A|| \space ||\boldsymbol x||, \quad ||AB|| \leqslant ||A|| \space ||B||\]

Based on this, we can define the norm of a matrix as

\[||A|| = \max_{\boldsymbol x \neq \boldsymbol 0} \frac{||A\boldsymbol x||}{||\boldsymbol x||}\]

The norm of the identity matrix is 1. For an orthogonal matrix \(Q\) we have \(||Q\boldsymbol x|| = ||\boldsymbol x||\), so the norm of an orthogonal matrix is also 1.
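A small sketch of this fact, using a rotation matrix as the orthogonal \(Q\) (the angle and test vector are arbitrary choices):

```python
import math

theta = 0.7  # arbitrary rotation angle
Q = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]  # orthogonal: a plane rotation

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def length(v):
    return math.hypot(v[0], v[1])

x = [3.0, 4.0]
ratio = length(matvec(Q, x)) / length(x)
print(ratio)  # 1.0: rotation preserves length, so ||Q|| = 1
```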

For a positive definite symmetric matrix, \(||A|| = \lambda_{\max}(A)\).

Decompose the matrix as \(A = Q \Lambda Q^T\). The orthogonal matrices on the left and right keep the length of a vector unchanged, so the maximum of \(||A\boldsymbol x|| / ||\boldsymbol x||\) is the largest diagonal entry of \(\Lambda\), i.e. the largest eigenvalue. For a general symmetric matrix we still have this decomposition, but the eigenvalues are no longer guaranteed to be positive, and the norm becomes the largest absolute value of an eigenvalue.
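A sketch of the symmetric case with a made-up matrix whose eigenvalues are 3 and -1, so its norm should be 3, the largest absolute eigenvalue. We approximate \(\max ||A\boldsymbol x|| / ||\boldsymbol x||\) by sampling unit vectors:

```python
import math

A = [[1.0, 2.0],
     [2.0, 1.0]]  # symmetric, eigenvalues 3 and -1

best = 0.0
n = 3600
for k in range(n):
    t = 2 * math.pi * k / n
    x, y = math.cos(t), math.sin(t)  # unit vector, so ||Ax|| / ||x|| = ||Ax||
    Ax0 = A[0][0] * x + A[0][1] * y
    Ax1 = A[1][0] * x + A[1][1] * y
    best = max(best, math.hypot(Ax0, Ax1))
print(best)  # ≈ 3: the largest |eigenvalue|, even though one eigenvalue is -1
```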

For a nonsymmetric matrix, the eigenvalues cannot measure the true size of the matrix: the norm can be larger than all of the eigenvalues.

For the example above, \(\boldsymbol x = (0, 1)\) is an eigenvector of the symmetric matrix \(A^TA\); the norm of \(A\) is in fact determined by the largest eigenvalue of \(A^TA\).

The norm of the matrix is the square root of the largest eigenvalue of \(A^TA\), which is exactly the largest singular value of the matrix: \(||A|| = \sigma_{\max}(A) = \sqrt{\lambda_{\max}(A^TA)}\).
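For the nonsymmetric case, a classic illustrative matrix (assumed here, not taken from the text) has both eigenvalues equal to 0, yet \(A^TA = \mathrm{diag}(0, 4)\), so the norm is \(\sqrt{4} = 2\):

```python
import math

A = [[0.0, 2.0],
     [0.0, 0.0]]  # both eigenvalues are 0, but A clearly stretches vectors

# Approximate max ||Ax|| over unit vectors x by sampling directions.
best = 0.0
n = 3600
for k in range(n):
    t = 2 * math.pi * k / n
    x, y = math.cos(t), math.sin(t)  # unit vector
    Ax0 = A[0][0] * x + A[0][1] * y
    Ax1 = A[1][0] * x + A[1][1] * y
    best = max(best, math.hypot(Ax0, Ax1))
print(best)  # ≈ 2 = sqrt(lambda_max(A^T A)), far above every eigenvalue of A
```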

2. Condition number

Some systems are very sensitive to errors while others are not; we use the condition number to measure this sensitivity.

Start from the equation \(A\boldsymbol x = \boldsymbol b\), and suppose that because of measurement error the right-hand side changes to \(\boldsymbol b + \Delta\boldsymbol b\). The solution then becomes \(\boldsymbol x + \Delta\boldsymbol x\), and our goal is to estimate how much \(\Delta\boldsymbol b\) influences \(\Delta\boldsymbol x\).

\[A(\boldsymbol x+ \Delta \boldsymbol x)=\boldsymbol b+\Delta \boldsymbol b \to A \Delta \boldsymbol x=\Delta \boldsymbol b \to \Delta \boldsymbol x=A^{-1}\Delta \boldsymbol b\]

If \(A^{-1}\) is very large, i.e. the matrix is close to singular, \(\Delta\boldsymbol x\) will be enormous. \(\Delta\boldsymbol x\) also becomes especially large when \(\Delta\boldsymbol b\) points in the worst direction, the one most amplified by \(A^{-1}\). The maximum error is \(||\Delta\boldsymbol x|| = ||A^{-1}|| \space ||\Delta\boldsymbol b||\).
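A sketch of this amplification with a nearly singular 2x2 matrix (an illustrative example; the solver is just the explicit 2x2 inverse formula):

```python
# A is nearly singular: its rows are almost identical.
A = [[1.0, 1.0],
     [1.0, 1.0001]]

def solve2x2(A, b):
    # Cramer's rule / explicit inverse for a 2x2 system A x = b.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [( A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]

x  = solve2x2(A, [2.0, 2.0])     # exact solution (2, 0)
x2 = solve2x2(A, [2.0, 2.0001])  # b perturbed by only 1e-4
print(x, x2)
```

Changing \(\boldsymbol b\) by only \(10^{-4}\) moves the solution from \((2, 0)\) to roughly \((1, 1)\): the error \(\Delta\boldsymbol b\) is amplified enormously by \(A^{-1}\).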

However, a problem arises: if we rescale \(A\), the solution \(\boldsymbol x\) and the error \(\Delta\boldsymbol x\) change at the same time, while the relative error \(||\Delta\boldsymbol x|| / ||\boldsymbol x||\) remains unchanged. So we should really compare the relative error of the solution \(\boldsymbol x\) with the relative error of \(\boldsymbol b\). The condition number \(c = ||A|| \space ||A^{-1}||\) measures the sensitivity of the equation \(A\boldsymbol x = \boldsymbol b\).

  • Proof

\[\tag{1}A \boldsymbol x=\boldsymbol b \to ||\boldsymbol b|| \leqslant ||A|| \space ||\boldsymbol x||\]

\[\tag{2}\Delta \boldsymbol x=A^{-1}\Delta \boldsymbol b \to ||\Delta \boldsymbol x|| \leqslant ||A^{-1}|| \space ||\Delta \boldsymbol b||\]

Multiplying (1) by (2), we obtain

\[\tag{3} ||\boldsymbol b|| \space ||\Delta \boldsymbol x|| \leqslant ||A|| \space ||A^{-1}|| \space ||\boldsymbol x|| \space ||\Delta \boldsymbol b||\]

Dividing both sides by \(||\boldsymbol b|| \space ||\boldsymbol x||\) gives

\[\tag{4}\frac{ ||\Delta \boldsymbol x||}{||\boldsymbol x||} \leqslant ||A|| \space ||A^{-1}|| \space \frac{\space ||\Delta \boldsymbol b||}{||\boldsymbol b||}=c\frac{\space ||\Delta \boldsymbol b||}{||\boldsymbol b||}\]
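To see inequality (4) in action, here is a sketch with an assumed positive definite matrix whose eigenvalues are 3 and 1, so \(c = 3/1 = 3\). Choosing \(\boldsymbol b\) along the eigenvector for \(\lambda = 3\) and \(\Delta\boldsymbol b\) along the eigenvector for \(\lambda = 1\) is the worst case, and the bound is attained with equality:

```python
import math

A = [[2.0, 1.0],
     [1.0, 2.0]]  # symmetric positive definite, eigenvalues 3 and 1
c = 3.0           # condition number: lambda_max / lambda_min = 3 / 1

def solve2x2(A, b):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [( A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]

def norm(v):
    return math.hypot(v[0], v[1])

b  = [3.0, 3.0]    # along the eigenvector (1, 1) for lambda = 3
db = [0.3, -0.3]   # along the eigenvector (1, -1) for lambda = 1 (worst case)
x  = solve2x2(A, b)    # A x = b
dx = solve2x2(A, db)   # A dx = db

rel_x = norm(dx) / norm(x)   # relative error of the solution
rel_b = norm(db) / norm(b)   # relative error of the right-hand side
print(rel_x, c * rel_b)      # 0.3 and 0.3: the bound (4) holds with equality
```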

Similarly, starting from \(||\boldsymbol x|| \leqslant ||A^{-1}|| \space ||\boldsymbol b||\) and \(||\Delta \boldsymbol b|| \leqslant ||A|| \space ||\Delta \boldsymbol x||\), we can obtain

\[\tag{5}\frac{||\Delta \boldsymbol x||}{||\boldsymbol x||} \geqslant \frac{1}{c} \space \frac{||\Delta \boldsymbol b||}{||\boldsymbol b||}\]

In addition, for a positive definite matrix, the condition number comes from its eigenvalues: \(c = \lambda_{\max} / \lambda_{\min}\).
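A sketch of this for a hand-picked positive definite matrix: for a 2x2 symmetric matrix the eigenvalues follow from the trace and determinant, and the matrix below has eigenvalues 6 and 1, so \(c = 6\):

```python
import math

A = [[5.0, 2.0],
     [2.0, 2.0]]  # symmetric positive definite

# Eigenvalues of a 2x2 symmetric matrix via trace and determinant:
# lambda = (tr ± sqrt(tr^2 - 4 det)) / 2
tr  = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4 * det)
lam_max = (tr + disc) / 2
lam_min = (tr - disc) / 2

# For positive definite A: ||A|| = lambda_max, ||A^-1|| = 1 / lambda_min.
c = lam_max / lam_min
print(lam_max, lam_min, c)  # 6.0 1.0 6.0
```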

For more great content, follow 「seniusen」!


Origin www.cnblogs.com/seniusen/p/11957372.html