Original | https://mp.weixin.qq.com/s/TDj3aCEHjaKHATZ7uviQMA
Rectangular matrices and positive definite matrices
So far we have been discussing square matrices, but a large number of practical problems involve rectangular matrices. One example is the matrix AᵀA used in least squares.
If A is an m × n rectangular matrix, then AᵀA is a symmetric square matrix, and we are naturally interested in whether AᵀA is positive definite. Instead of computing eigenvalues or leading determinants, we can check its quadratic form directly: xᵀ(AᵀA)x = (Ax)ᵀ(Ax) = ‖Ax‖² ≥ 0.
The quadratic form above equals zero if and only if Ax = 0, so we only need to examine when Ax = 0 can happen for a nonzero x.
When we discussed null spaces, we saw that for an m × n matrix with full column rank (rank n, which requires m ≥ n), the null space contains only the zero vector. Therefore, when A has full column rank, Ax = 0 only for x = 0, and for every nonzero vector x we have xᵀ(AᵀA)x > 0: AᵀA is positive definite.
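This argument is easy to check numerically. Below is a sketch using a hypothetical 3 × 2 matrix with full column rank (numpy assumed):

```python
import numpy as np

# A hypothetical 3x2 matrix with full column rank (rank 2).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])

G = A.T @ A  # the 2x2 matrix A^T A

# A^T A is symmetric ...
print(np.allclose(G, G.T))                # True

# ... and positive definite: all eigenvalues are strictly positive.
print(np.all(np.linalg.eigvalsh(G) > 0))  # True

# For a nonzero x, x^T (A^T A) x equals ||Ax||^2 > 0.
x = np.array([2.0, -1.0])
print(np.isclose(x @ G @ x, np.linalg.norm(A @ x) ** 2))  # True
```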
Similar matrices
Let A and B be n × n matrices. If there exists an invertible matrix M such that B = M⁻¹AM, then A and B are said to be similar, written A ~ B.
Eigenvalues of similar matrices
In fact, we have met similar matrices before. If A has n linearly independent eigenvectors, then A can be diagonalized as A = SΛS⁻¹, or equivalently S⁻¹AS = Λ: A and its eigenvalue matrix Λ are similar, with M = S, the eigenvector matrix. A actually has many similar matrices; replacing S with an arbitrary invertible matrix M yields others, and Λ is simply the simplest member of the family.
Let us summon a matrix:
Λ and A are similar. If we take another invertible matrix, we obtain yet another matrix similar to A:
Observing B, we find that its trace is 4 (the sum of the eigenvalues) and its determinant is 3 (the product of the eigenvalues), which suggests that B has the same eigenvalues as A. This is indeed the defining property: similar matrices have the same eigenvalues. In fact, essentially every 2 × 2 matrix with eigenvalues 1 and 3 is similar to A.
Why do similar matrices have the same eigenvalues? Let A and B be similar with B = M⁻¹AM, and start from the characteristic equation Ax = λx. Inserting MM⁻¹ = I and multiplying both sides by M⁻¹ on the left gives M⁻¹AM(M⁻¹x) = λ(M⁻¹x), that is, B(M⁻¹x) = λ(M⁻¹x).
This is a new characteristic equation: B has eigenvalue λ with eigenvector M⁻¹x, so the eigenvalues of B match those of A. Of course, we should not expect the eigenvectors to match as well; if both the eigenvalues and the eigenvectors were identical, the two similar matrices would simply be equal.
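The 2 × 2 example above appears only as an image, so here is a sketch with a hypothetical matrix that has trace 4, determinant 3, and eigenvalues 1 and 3, checking both claims numerically (numpy assumed):

```python
import numpy as np

# Hypothetical matrix with trace 4 and determinant 3 (eigenvalues 1 and 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Any invertible M produces a similar matrix B = M^-1 A M.
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.linalg.inv(M) @ A @ M

# Similar matrices share eigenvalues ...
print(np.allclose(np.sort(np.linalg.eigvals(B).real), [1.0, 3.0]))  # True

# ... and if x is an eigenvector of A for eigenvalue lambda,
# then M^-1 x is an eigenvector of B for the same eigenvalue.
lam = 3.0
x = np.array([1.0, 1.0])            # A x = 3 x
y = np.linalg.inv(M) @ x
print(np.allclose(B @ y, lam * y))  # True
```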
Properties of similar matrices
With B = M⁻¹AM as above, let A, B, and C be arbitrary square matrices of the same order. Then:
(1) Reflexivity: A ~ A
(2) Symmetry: if A ~ B, then B ~ A
(3) Transitivity: if A ~ B and B ~ C, then A ~ C
(4) If A ~ B, then A and B have the same eigenvalues, determinant, rank, and trace.
(5) If A ~ B and A is invertible, then B is also invertible, and A⁻¹ ~ B⁻¹.
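Properties (4) and (5) can also be sketched numerically with a hypothetical invertible A and M (numpy assumed):

```python
import numpy as np

# Hypothetical invertible A and invertible M; B = M^-1 A M is similar to A.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
M = np.array([[1.0, 1.0],
              [1.0, 2.0]])
B = np.linalg.inv(M) @ A @ M

# Property (4): same eigenvalues, determinant, rank, and trace.
print(np.allclose(np.sort(np.linalg.eigvals(B).real), [2.0, 3.0]))  # True
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))               # True
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B))         # True
print(np.isclose(np.trace(A), np.trace(B)))                         # True

# Property (5): since A is invertible, B is invertible too, and the
# same M relates the inverses: B^-1 = M^-1 A^-1 M.
print(np.allclose(np.linalg.inv(B), np.linalg.inv(M) @ np.linalg.inv(A) @ M))  # True
```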
The case of repeated eigenvalues
When all eigenvalues of A are distinct, A necessarily has n linearly independent eigenvectors and can be diagonalized. If some eigenvalues are equal, whether A can be diagonalized is no longer guaranteed and must be checked separately. We are equally interested in the similar matrices of this kind of matrix.
The diagonal matrix above has two equal eigenvalues: λ1 = λ2 = 4. If A has similar matrices, let us see what they look like:
In this case the only matrix similar to A is A itself. A diagonal matrix with repeated eigenvalues like this one is a multiple of the identity, so M⁻¹AM = 4M⁻¹IM = 4I = A for every invertible M: such matrices are similar only to themselves.
Another kind of matrix with repeated eigenvalues can have many similar matrices. The simplest member of the family whose two eigenvalues are both 4 is:
This matrix cannot be diagonalized. If it could, its eigenvalue matrix would be Λ = 4I, and we would have A = SΛS⁻¹ = 4SS⁻¹ = 4I:
This clearly does not hold. Matrices like A share exactly the same eigenvalues but cannot be diagonalized; for example, replace the 1 in the upper-right corner with any other nonzero value. A is the simplest matrix in this family, and it is called the Jordan normal form.
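The failure to diagonalize can be seen numerically: for this A, the eigenspace of λ = 4 is only one-dimensional, so there is no basis of eigenvectors (numpy assumed):

```python
import numpy as np

# The matrix from the text: both eigenvalues are 4.
A = np.array([[4.0, 1.0],
              [0.0, 4.0]])

print(np.linalg.eigvals(A))  # [4. 4.]

# The eigenspace for lambda = 4 is the null space of (A - 4I).
# rank(A - 4I) = 1, so that null space is only 1-dimensional:
# a single independent eigenvector cannot diagonalize a 2x2 matrix.
print(np.linalg.matrix_rank(A - 4 * np.eye(2)))  # 1
```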
Jordan normal form
Jordan showed that even a square matrix A with repeated eigenvalues that cannot be diagonalized can always be transformed into a Jordan normal form, which comes very close to a diagonal matrix. Specifically, for a square matrix A there always exists an invertible matrix P of the same size such that P⁻¹AP = J, where J is the Jordan normal form of A.
So what exactly is a Jordan normal form? Here is an example:
The matrix above is a Jordan normal form. Every blank entry is 0, and each red block is a Jordan block. A Jordan block must satisfy two properties: the entries on its main diagonal are all equal (a single repeated eigenvalue), and the entries on the superdiagonal just above the main diagonal are all 1 (if the block is large enough to have a superdiagonal). The matrix above consists of 5 Jordan blocks. The block [4] is special: it has only a main diagonal and no superdiagonal, a Jordan block of size 1.
A Jordan normal form is a block-diagonal matrix composed of Jordan blocks arranged along the diagonal.
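A matrix of this kind can be assembled directly from its blocks. Here is a sketch using SymPy's jordan_block and diag helpers, with hypothetical block sizes and eigenvalues:

```python
from sympy import Matrix, diag

# Hypothetical Jordan normal form built from three Jordan blocks:
# size 2 with eigenvalue 4, size 1 with eigenvalue 4 (the special [4]
# block), and size 2 with eigenvalue 3.
J = diag(Matrix.jordan_block(size=2, eigenvalue=4),
         Matrix.jordan_block(size=1, eigenvalue=4),
         Matrix.jordan_block(size=2, eigenvalue=3))

# A 5x5 matrix: eigenvalues on the main diagonal, a 1 on the
# superdiagonal inside each block, and 0 everywhere else.
print(J)
```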
Sometimes a Jordan normal form is not so easy to recognize. Consider a few candidates:
J1 and J2 are relatively easy to recognize:
J3 is not a Jordan normal form: it has 1s on the superdiagonal, but the main-diagonal entries they connect are not all equal, so it cannot be split into valid Jordan blocks.
J4 is also a Jordan normal form; it consists of three Jordan blocks of size 1.
Jordan normal form and similar matrices
Jordan's result tells us that if a family of matrices can all be reduced to the same Jordan normal form J, then all of those matrices are similar to one another, and each can be written as PJP⁻¹ for some invertible P.
A is a Jordan normal form. Replacing the 1 in its upper-right corner with any other nonzero value yields matrices that can still be reduced to the form of A, so they are all similar.
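SymPy can verify this; a sketch computing the Jordan form of a hypothetical variant in which the upper-right 1 is replaced by 3:

```python
from sympy import Matrix

# Hypothetical variant of A: the upper-right 1 replaced by 3.
A = Matrix([[4, 3],
            [0, 4]])

# jordan_form returns P and J with A = P * J * P^-1.
P, J = A.jordan_form()
print(J)                     # Matrix([[4, 1], [0, 4]])
print(A == P * J * P.inv())  # True
```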
The following group of matrices is also similar:
The first block of B can easily be converted into a Jordan block by a similarity transformation.
If two matrices of the same order have the same number of Jordan blocks but the block sizes differ, the two matrices are not similar:
C consists of Jordan blocks of sizes 3 and 1, while D consists of two Jordan blocks of size 2. Although they have the same number of Jordan blocks, the sizes differ, so C and D are not similar.
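Since C and D are shown only as images, here is a sketch assuming the eigenvalue is 4 in every block; the ranks of powers of (X − 4I) expose the difference in block sizes (numpy assumed):

```python
import numpy as np

# Hypothetical C: Jordan blocks of sizes 3 and 1, eigenvalue 4.
C = np.array([[4, 1, 0, 0],
              [0, 4, 1, 0],
              [0, 0, 4, 0],
              [0, 0, 0, 4]], dtype=float)

# Hypothetical D: two Jordan blocks of size 2, eigenvalue 4.
D = np.array([[4, 1, 0, 0],
              [0, 4, 0, 0],
              [0, 0, 4, 1],
              [0, 0, 0, 4]], dtype=float)

# Same eigenvalues and trace ...
print(np.trace(C) == np.trace(D))  # True

# ... but similar matrices must give the same rank for every power of
# (X - 4I), and here the squares differ: C is not similar to D.
N_C = C - 4 * np.eye(4)
N_D = D - 4 * np.eye(4)
print(np.linalg.matrix_rank(N_C @ N_C))  # 1
print(np.linalg.matrix_rank(N_D @ N_D))  # 0
```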
Source: the WeChat public account "我是8位的"
This article is shared for learning and research. To repost it, please contact the author and credit the author and the source. Non-commercial use only!