Understanding Matrix Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors

Some collected notes

After being confused for a long time, I finally figured it out. It is essentially a data-processing method that simplifies the data. Multiplying a matrix by one of its eigenvectors only scales the vector along its own direction, much like the vector dot product gives a projection.
By finding the eigenvalues and eigenvectors, the matrix's data is projected onto an orthogonal space, and the size of each projection is the corresponding eigenvalue. This intuitively reflects the basic characteristics of the data.
The maximum eigenvalue is not the maximum projection of the data over all directions; it is only the maximum over the directions of that orthogonal space. As for why the eigenvectors obtained are orthogonal (this holds for real symmetric matrices), it can be proved.
There is no other such orthogonal space: for a general matrix that is full rank, there is only one such space.
Is there a better space for reflecting the characteristics of the data? Generally speaking, an orthogonal space is good, though it cannot be ruled out that special applications call for a non-orthogonal space that works better.
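
As a minimal numerical sketch of this idea (assuming a real symmetric matrix, the case where orthogonal eigenvectors are guaranteed; the matrix below is my own illustration, not from the original notes):

```python
import numpy as np

# Illustrative real symmetric matrix (symmetry guarantees orthogonal eigenvectors).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# eigh is for symmetric/Hermitian matrices; it returns real eigenvalues in
# ascending order and an orthonormal set of eigenvectors as columns.
eigenvalues, eigenvectors = np.linalg.eigh(A)
print("eigenvalues:", eigenvalues)

# The eigenvector matrix V satisfies V^T V = I, i.e. the directions are
# mutually orthogonal unit vectors.
print(np.allclose(eigenvectors.T @ eigenvectors, np.eye(2)))  # True
```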

My own understanding:

The original defining formula of eigenvalues and eigenvectors:
$A\alpha = \lambda\alpha$

That is to say, eigenvalues and eigenvectors belong to the transformation matrix A. The physical meaning is that after the transformation A is applied to an eigenvector, the eigenvector's direction does not change; only its size is scaled, and the scaling factor is $\lambda$.
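
A small sketch to see the defining equation numerically; the matrix here is just an illustrative example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# eig returns eigenvalues and the corresponding eigenvectors (as columns).
lam, vecs = np.linalg.eig(A)

# For each eigenpair, A @ alpha equals lambda * alpha: the direction is kept,
# only the length is scaled by the eigenvalue.
for i in range(len(lam)):
    alpha = vecs[:, i]
    print(np.allclose(A @ alpha, lam[i] * alpha))  # True
```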

To understand this further: there is more than one eigenvector, and together these eigenvectors form a feature space (eigenspace). What does this feature space look like?
It is the space S that the transformation matrix A maps into: applying the transformation A to a vector carries that vector into this feature space S.

How do we perceive and describe this feature space S?
By analogy with a fundamental system of solutions, we use a set of linearly independent vectors to describe this vector space. Every vector in the space is a linear combination of this set of linearly independent vectors; such a set can be called a basis.
The more standard definition: a basis of a vector space is a special subset of it, and the elements of the basis are called basis vectors. Any element of the vector space can be uniquely expressed as a linear combination of the basis vectors.
Taking the two-dimensional coordinate system as an example, any pair of non-parallel vectors (excluding the zero vector) can represent this space. The most commonly used basis vectors are $(1,0)$ and $(0,1)$.
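
For example, expressing a vector in a non-standard basis amounts to solving a small linear system; the basis vectors below are made up for illustration:

```python
import numpy as np

# Two non-parallel (hence linearly independent) basis vectors, stored as columns.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

v = np.array([3.0, 2.0])

# Coordinates of v in this basis: solve B @ c = v.
c = np.linalg.solve(B, v)
print(c)                      # [1. 2.]
print(np.allclose(B @ c, v))  # True: v is recovered as a linear combination
```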

Going back to the feature space S produced by the transformation matrix A: the eigenvectors of A are a set of basis vectors for this feature space, and eigenvectors corresponding to different eigenvalues of A are linearly independent. The set of eigenvectors of A can therefore be used to describe the feature space S (provided A has a full set of eigenvectors).
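
A quick numerical check of that claim, with an illustrative matrix whose eigenvalues are distinct: the eigenvector matrix has full rank, so its columns can serve as a basis.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

lam, vecs = np.linalg.eig(A)
print("distinct eigenvalues:", lam)

# Distinct eigenvalues -> linearly independent eigenvectors -> full-rank matrix,
# so the eigenvector columns span (and form a basis of) the space.
print(np.linalg.matrix_rank(vecs) == A.shape[0])  # True
```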

Supplement

Perhaps it is not the space itself that has changed; what has changed is only the angle and the reference frame from which we look at things.

A bit more understanding about orthonormalization and similarity diagonalization

Why Orthonormalization?

Note that the object of orthonormalization here is the basis of any vector space, not just the basis formed by the eigenvectors of A.
Orthogonal: the inner product of the two vectors is 0.
Normalized: the modulus (length) of each vector is 1.

The advantage of orthonormalization: it makes projections easy to compute, i.e. finding the components along a given basis vector, such as the x-axis and y-axis components in the two-dimensional coordinate system.

The projection of the vector $\alpha_2$ onto $\alpha_1$ is
$\frac{(\alpha_1,\alpha_2)}{(\alpha_1,\alpha_1)}\alpha_1$

If $\alpha_1$ is a unit vector, then $(\alpha_1,\alpha_1)=1$, and the projection becomes
$(\alpha_1,\alpha_2)\cdot\alpha_1$

That is, the inner product of $\alpha_2$ with the unit vector (the size of the projection) multiplied by the unit vector (the direction of the projection).
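
A small numeric sketch of the projection formula (the two vectors are made up for illustration):

```python
import numpy as np

a1 = np.array([3.0, 4.0])   # vector being projected onto
a2 = np.array([2.0, 1.0])   # vector being projected

# General formula: proj_{a1}(a2) = (a1, a2) / (a1, a1) * a1
proj = (a1 @ a2) / (a1 @ a1) * a1
print(proj)                              # [1.2 1.6]

# If a1 is normalized to a unit vector, the denominator (a1, a1) = 1 drops out.
u1 = a1 / np.linalg.norm(a1)
proj_unit = (u1 @ a2) * u1
print(np.allclose(proj, proj_unit))      # True: same projection
```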

Similarity diagonalization

Why similarity diagonalization?

Why mention similarity diagonalization? Because the diagonalization process uses the set of eigenvectors again. I feel there is just too much theory to keep track of in the process of learning linear algebra.

Similarity diagonalization simplifies the way a matrix is expressed. After the transformation matrix A is diagonalized, isn't the transformation much simpler?
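
A minimal sketch of what "simpler" means here, with an illustrative diagonalizable matrix: collecting the eigenvectors into $P$ gives $P^{-1}AP = D$, and powers of A reduce to powers of the diagonal entries.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, P = np.linalg.eig(A)   # columns of P are eigenvectors of A
D = np.diag(lam)

# Similarity diagonalization: P^{-1} A P = D.
print(np.allclose(np.linalg.inv(P) @ A @ P, D))  # True

# The transformation becomes easy to work with: A^5 = P D^5 P^{-1},
# where D^5 is just the eigenvalues raised to the 5th power.
print(np.allclose(np.linalg.matrix_power(A, 5),
                  P @ np.diag(lam ** 5) @ np.linalg.inv(P)))  # True
```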

But hold on: after diagonalization, is this still the same transformation as the original one? Although both map into the same feature space, the mapping relationship has clearly changed.

Here we need a deeper understanding of similarity diagonalization: the original matrix and the diagonal matrix must be similar. So what exactly is similarity? What is its meaning, and what are its properties?
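
For reference only (the standard textbook definition, to be dug into in the follow-up post): matrices A and B are called similar if there exists an invertible matrix P such that

$B = P^{-1}AP$

Similarity diagonalization is exactly the special case where B is a diagonal matrix of eigenvalues of A and the columns of P are the corresponding eigenvectors.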

(Leaving a placeholder here: this post is getting too bloated, so I will open a new post on similarity diagonalization and work through my understanding there.)

Origin: blog.csdn.net/m0_51312071/article/details/132597409