Table of contents
3. Linear dependence and linear independence
7. Eigenvalues and eigenvectors of matrices
9. Intersection and sum of subspaces
10. Rank and dimension of a matrix
13. Characteristic polynomials and minimal polynomials
17. Invariant factors and elementary factors
3. Comparison of real and complex inner product spaces
3. Normal matrix and Schur decomposition
5. Singular value decomposition
1. Generalized inverse via full-rank decomposition
2. Generalized inverse via singular value decomposition
3. Generalized inverse via spectral decomposition
4. Consistent systems of equations
6. Using the generalized inverse to decide whether an equation has a solution
3. Solutions to non-homogeneous linear equations
6. Matrix sequences and matrix functions
1. Undetermined coefficient method
3. Matrix differential equations
7. Matrix eigenvalue estimation
Reference blog
1. Linear space
1. Sets and number fields
A set is simply a collection of things. A number field is a set of numbers that is closed under addition, subtraction, multiplication, and division (by nonzero elements); it is a special kind of set and is always infinite.
2. Linear space
A linear space is a space in which any vector, whether multiplied by a scalar or added to other vectors, gives a result that still lies in the space; that is, the space is closed under vector addition and scalar multiplication, analogous to the closure property of a number field.
3. Linear dependence and linear independence
Vectors $x_1, \ldots, x_m$ are linearly dependent if there exist scalars $k_1, \ldots, k_m$, not all zero, such that $k_1 x_1 + \cdots + k_m x_m = 0$. Otherwise they are linearly independent.
Intuitively, linear dependence means at least one of the vectors lies in the span of the others (it can be written as a combination of them); linear independence means no vector can be expressed through the others, so the only combination that gives 0 is the one with all coefficients 0.
4. Bases and dimensions
If a linear space can represent all of its vectors by combinations of a minimum of m vectors, then this minimum number m is the dimension of the linear space, written dim V = m for the m-dimensional linear space V.
The meaning of vector coordinates: the components of a vector in the linear space with respect to each basis vector are called the coordinates of the vector.
5. Linear subspace
Let W be a non-empty subset of V, where V is a linear space over the number field K. W is a linear subspace of V if it satisfies:
- If x, y ∈ W, then x + y ∈ W
- If x ∈ W and k ∈ K, then kx ∈ W
If W equals V or the zero subspace {0}, it is called a trivial subspace; otherwise it is called a non-trivial subspace.
6. Matrix range and kernel
Range (column space): the vectors produced by the matrix transformation all fall in one space, and this space is called the range.
Let R(A) denote the range of the matrix A: $R(A) = \{\, y : y = Ax \text{ for some } x \,\}$, the set of all images y = Ax.
The range of a matrix satisfies: R(A) is spanned by the columns of A, and $\dim R(A) = \operatorname{rank}(A)$.
Kernel: the vectors that the matrix transformation sends to 0. The space they form, $N(A) = \{\, x : Ax = 0 \,\}$, is the kernel (null space) of A.
The dimension of the kernel is called the nullity of A, denoted n(A).
Dimension: let V be the domain the transformation acts on. Then dim V = dimension of the kernel of the transformation matrix + dimension of the range; for an $m \times n$ matrix, $n = n(A) + \operatorname{rank}(A)$.
The four subspaces of a matrix: any matrix splits its domain and codomain into two pairs, row space + null space (in $\mathbb{R}^n$) and column space + left null space (in $\mathbb{R}^m$), with
dim(null space) = number of columns − r = n − r
dim(left null space) = number of rows − r = m − r
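A quick numerical check of these dimension counts, a minimal numpy sketch (the 3×4 matrix is an arbitrary example whose third row is the sum of the first two):

```python
import numpy as np

# Example 3x4 matrix of rank 2 (row 3 = row 1 + row 2)
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])
m, n = A.shape
r = np.linalg.matrix_rank(A)

print("rank(A) =", r)                          # dim(row space) = dim(column space) = 2
print("dim(null space) = n - r =", n - r)      # 4 - 2 = 2
print("dim(left null space) = m - r =", m - r) # 3 - 2 = 1
```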
7. Eigenvalues and eigenvectors of matrices
Vector transformation: the matrix A can be understood as a transformation of the vector x; an eigenvector is a vector that A only rescales in length:
$Ax = \lambda x$
It can also be understood this way: given a matrix A, find the vectors x that, after transformation by A, change only in length and not in direction. These vectors are the eigenvectors of the matrix A, and the scaling factors $\lambda$ are the corresponding eigenvalues.
When A is diagonalizable, its eigenvectors form a basis, the coordinate system after the matrix operation. A matrix operation can be understood either as a change of the vector or as a transformation of the coordinate system (that is, a change of basis).
Coordinate system transformation: a vector x, multiplied by the matrix A, becomes the vector b: $Ax = b$.
This can be understood as the vector remaining unchanged while the coordinate system is transformed:
x is the description in the A coordinate system, b is the description in the I coordinate system; writing $Ax = Ib$, the I is usually omitted.
For example, in the ordinary two-dimensional coordinate system there are two ways to turn the point (2, 3) into the point (1, 1). One is to move the point directly, i.e. motion; the other is to keep the point and rescale the axes, shrinking the horizontal axis to 1/2 and the vertical axis to 1/3.
Understanding matrices: for a given linear transformation, many matrices can describe it (one per choice of basis). These matrices are not the transformation itself but ways of describing it; they are all similar to one another and share the same eigenvalues (their eigenvectors correspond under the change of basis).
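A minimal numpy sketch of the defining relation $Ax = \lambda x$ (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])        # symmetric example, eigenvalues 3 and 1

vals, vecs = np.linalg.eig(A)   # columns of `vecs` are the eigenvectors
for lam, x in zip(vals, vecs.T):
    # A x equals lambda * x: the direction is unchanged, only the length scales
    print(lam, np.allclose(A @ x, lam * x))
```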
8. Determinant
The matrix A can be understood as a linear transformation of n-dimensional space. If the object being transformed is a region T (say a cube), A carries T to a region M, and the volume after transformation divided by the volume before is the scaling rate, which is $\lvert \det A \rvert$, the absolute value of the determinant.
Inverse of a matrix: if A is understood as the matrix carrying T to M, then $A^{-1}$ is the matrix carrying M back to T, and $\det(A^{-1}) = 1/\det(A)$.
If a matrix is invertible, its determinant (the transformation factor) must be nonzero; if the determinant is 0, the volume collapses and the matrix cannot be inverted.
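A minimal numpy sketch of the volume interpretation of the determinant (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[3., 1.],
              [0., 2.]])
# The unit square spanned by e1, e2 (area 1) maps to the parallelogram
# spanned by the columns A e1, A e2; its area is |det A|.
a, b = A[:, 0], A[:, 1]
area = abs(a[0] * b[1] - a[1] * b[0])   # parallelogram area
print(np.linalg.det(A), area)           # 6.0  6.0
```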
9. Intersection and sum of subspaces
The intersection $V_1 \cap V_2$ is a subspace. The sum is not the set union: $V_1 + V_2 = \{\, x_1 + x_2 : x_1 \in V_1,\ x_2 \in V_2 \,\}$ (the union of two subspaces is generally not a subspace).
Direct sum: if every vector in $V_1 + V_2$ has a unique representation as a vector in $V_1$ plus a vector in $V_2$, the sum is called direct and denoted $V_1 \oplus V_2$.
It can be understood as projecting a vector in a high-dimensional space onto mutually orthogonal, independent subspaces; processing the components and adding them back recovers the vector, similar to the decomposition of a force.
10. Rank and dimension of a matrix
Rank: the rank of a matrix = the maximal number of linearly independent columns = the maximal number of linearly independent rows, written rank(A).
Dimension: a linear space is spanned by vectors; its dimension is the number of vectors in a basis, written dim(V). For the column space of A this equals rank(A).
11. Matrix types

| Type | Defining condition |
| --- | --- |
| Real orthogonal matrix | $A^{T}A = AA^{T} = I$ |
| Unitary matrix | $A^{H}A = AA^{H} = I$ |
| Symmetric matrix | $A^{T} = A$ |
| Hermitian matrix | $A^{H} = A$ |
| Normal matrix (always unitarily diagonalizable) | $A^{H}A = AA^{H}$ |
| Singular matrix | $\det A = 0$ |
| Idempotent matrix | $A^{2} = A$ |
| Simple matrix | diagonalizable matrix |
11. Hermite normal form
A matrix $H \in \mathbb{R}^{m \times n}$ is in Hermite normal form (reduced row echelon form) if its first r rows are non-zero and contain, in the pivot columns, an identity matrix $I_r$, and its last m − r rows are all zero. Example:
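With $m = n = 3$ and $r = 2$ (an illustrative instance):

$$H = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{pmatrix}$$

The first two rows contain $I_2$ in the pivot columns, and the last $m - r = 1$ row is zero.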
12. Jordan normal form
A Jordan matrix is block upper triangular: each diagonal Jordan block carries one eigenvalue $\lambda_i$ repeated on its diagonal, with 1s on the superdiagonal. Example:
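An illustrative instance, with a block of order 2 for $\lambda = 2$ and a block of order 1 for $\lambda = 3$:

$$J = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}$$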
12. Rank deficiency of a matrix
If an $n \times n$ matrix has full rank, then its rank is n.
If the matrix is not of full rank and its rank is r, then the deficiency is n − r.
13. Characteristic polynomials and minimal polynomials
(1) Characteristic polynomial
The characteristic polynomial of an n-th order matrix A is $\varphi(\lambda) = \det(\lambda I - A)$, and A is a root (zero) of its own characteristic polynomial (the Cayley–Hamilton theorem).
That is to say, it satisfies: $\varphi(A) = O$.
(2) Minimal polynomial
The minimal polynomial m(λ) is the polynomial of smallest degree with leading coefficient 1 such that m(A) = O; it divides the characteristic polynomial. The general method is to start from the characteristic polynomial, lower the multiplicity of a repeated factor by 1, and test again, until the smallest polynomial satisfying m(A) = O is found.
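A minimal numpy sketch of the Cayley–Hamilton theorem and the trial-and-error search for the minimal polynomial (the matrix is an arbitrary example with characteristic polynomial $(\lambda - 2)^3$):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [0., 2., 0.],
              [0., 0., 2.]])
I = np.eye(3)

# Characteristic polynomial: phi(lambda) = (lambda - 2)^3, and phi(A) = O
phi_A = (A - 2*I) @ (A - 2*I) @ (A - 2*I)
print(np.allclose(phi_A, 0))                    # True: Cayley-Hamilton

# Lower the power and test: (A - 2I)^2 = O already, but (A - 2I) != O,
# so the minimal polynomial is (lambda - 2)^2.
print(np.allclose((A - 2*I) @ (A - 2*I), 0))    # True
print(np.allclose(A - 2*I, 0))                  # False
```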
14. Matrix congruence
Condition for A and B to be congruent:
there exists an invertible matrix C such that $C^{T}AC = B$.
15. Vector Orthogonalization
Given a set of linearly independent (non-coplanar) vectors, how can they be orthogonalized?
The idea (Gram–Schmidt): first orthogonalize two of the vectors, then subtract from the third vector its components along the two orthogonal vectors, and so on; see the sketch below.
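A minimal sketch of the Gram–Schmidt procedure in numpy (`gram_schmidt` is a hypothetical helper name; the three input vectors are arbitrary examples):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of independent vectors; returns orthonormal vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for q in basis:
            w -= (w @ q) * q            # subtract the component along each earlier q
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

Q = gram_schmidt([np.array([1., 1., 0.]),
                  np.array([1., 0., 1.]),
                  np.array([0., 1., 1.])])
print(np.allclose(Q @ Q.T, np.eye(3)))  # True: rows are orthonormal
```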
16. Orthogonal complement
If a vector y is orthogonal to vectors u and v, then y is also orthogonal to every linear combination of u and v.
Let W be a subspace of a Euclidean space V. A vector y is orthogonal to W exactly when y is orthogonal to each basis vector of W.
Orthogonal complement:
$W^{\perp} = \{\, y \in V : \langle y, w \rangle = 0 \text{ for all } w \in W \,\}$ is called the orthogonal complement of the space W.
Dimension of the orthogonal complement: $\dim W + \dim W^{\perp} = \dim V$.
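A minimal numpy sketch: the orthogonal complement of the row space of B is the null space of B, recovered here from the SVD (B is an arbitrary example):

```python
import numpy as np

# W = span of the rows of B; its orthogonal complement is the null space of B.
B = np.array([[1., 0., 1.],
              [0., 1., 1.]])
_, s, Vt = np.linalg.svd(B)
r = np.sum(s > 1e-10)
W_perp = Vt[r:]                      # basis of the orthogonal complement
print(W_perp)                        # one vector: dim W + dim W_perp = 3
print(np.allclose(B @ W_perp.T, 0))  # True: orthogonal to every row of B
```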
17. Invariant factors and elementary factors
Elementary divisors and invariant factors.
Invariant factors:
The diagonal entries of the Smith normal form, $d_1(\lambda), d_2(\lambda), \ldots, d_r(\lambda)$, arranged so that each divides the next ($d_i \mid d_{i+1}$), are the invariant factors.
The k-th determinant divisor $D_k(\lambda)$ is the greatest common divisor of all k-th order minors, and the invariant factors satisfy $d_k(\lambda) = D_k(\lambda)/D_{k-1}(\lambda)$ (with $D_0 = 1$).
Elementary divisors:
Discard the constant invariant factors and split each remaining invariant factor into powers of irreducible (over $\mathbb{C}$, linear) factors; these prime-power factors, counted with repetition, are the elementary divisors.
Example (illustrative):
Invariant factors: $d_1 = 1$, $d_2 = \lambda - 1$, $d_3 = (\lambda - 1)(\lambda - 2)^2$
Elementary divisors: $\lambda - 1$, $\lambda - 1$, $(\lambda - 2)^2$
18. Inverse of a matrix
Matrix inversion, method 1 (adjugate): $A^{-1} = \dfrac{1}{\det A}\,A^{*}$, where $A^{*}$ is the adjugate matrix.
Matrix inversion, method 2 (row reduction):
Write the matrices A and I side by side as [A | I], then perform the same elementary row transformations on both, reducing A to I; at that point the block in the position of I is $A^{-1}$. The principle: elementary row transformations are equivalent to left-multiplying A by a matrix T, so TA = I forces $T = A^{-1}$, and the same T applied to I yields $A^{-1}$.
How to find eigenvalues and eigenvectors: for a matrix A, solve $\det(\lambda I - A) = 0$ for the eigenvalues, then solve $(\lambda I - A)x = 0$ for the eigenvectors.
To evaluate the determinant, perform row and column transformations until some row or column has only one nonzero entry; the determinant then equals that entry multiplied by its algebraic cofactor.
Finding the adjugate matrix of A:
Compute the algebraic cofactor at each position. What needs to be noted is that the cofactor of entry (i, j) must be placed at the transposed position (j, i); special attention is needed here. A sketch follows:
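A minimal numpy sketch of the adjugate method, with the transposed placement of the cofactors made explicit (`adjugate` is a hypothetical helper; the matrix is an arbitrary example):

```python
import numpy as np

def adjugate(A):
    """Adjugate via cofactors; the cofactor of (i, j) goes to position (j, i)."""
    n = A.shape[0]
    adj = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)  # transposed!
    return adj

A = np.array([[2., 1.],
              [5., 3.]])
print(adjugate(A) / np.linalg.det(A))   # [[ 3. -1.] [-5.  2.]]
print(np.linalg.inv(A))                 # same result
```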
2. Two spaces
1. Euclidean space
A real inner product space.
The result of the inner product is a real number.
2. Unitary space
A complex inner product space.
The inner product is $\langle x, y \rangle = \sum_i x_i \overline{y_i}$, where the bar denotes complex conjugation.
3. Comparison of real and complex inner product spaces

| | Real inner product space (Euclidean space) | Complex inner product space (unitary space) |
| --- | --- | --- |
| Scalars | real numbers | complex numbers |
| Length-preserving transformation | orthogonal transformation (orthogonal matrix, $A^{T}A = I$) | unitary transformation (unitary matrix, $A^{H}A = I$) |
| Self-adjoint transformation | symmetric transformation (real symmetric matrix, $A^{T} = A$) | Hermitian transformation (Hermitian matrix, $A^{H} = A$) |
| Eigenvalues of the self-adjoint matrix | real numbers | real numbers |
| Eigenvectors (for distinct eigenvalues) | orthogonal | orthogonal |
3. Matrix decomposition
Purpose: decompose a matrix into simpler factor matrices, making it easier to compute with the matrix or to analyze its properties.
1. Eigenvalue decomposition
$A = P \Lambda P^{-1}$, where the invertible matrix P has the (normalized) eigenvectors of A as its columns and $\Lambda$ is the diagonal matrix of eigenvalues.
Tips:
The eigenvectors that make up P here do not need Schmidt orthogonalization; Schmidt orthogonalization is only for convenience. For example, the Schur decomposition below uses Schmidt orthogonalization because, after orthonormalization, P becomes an orthogonal matrix, i.e. $P^{-1} = P^{T}$.
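A minimal numpy sketch of the eigenvalue decomposition $A = P\Lambda P^{-1}$ (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])
vals, P = np.linalg.eig(A)       # columns of P are eigenvectors (not orthonormalized)
Lam = np.diag(vals)
print(np.allclose(P @ Lam @ np.linalg.inv(P), A))   # True: A = P Lam P^{-1}
```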
2. QR decomposition
Definition: a real (complex) non-singular matrix A can be decomposed into an orthogonal (unitary) matrix Q and an upper triangular matrix R with positive diagonal entries: A = QR.
Singular matrix: a matrix with determinant 0, that is, a non-invertible matrix.
Let the column vectors of A be $a_1, a_2, \ldots, a_n$.
(1) Find the orthonormalized matrix of A, that is, Q:
Orthogonalize (Gram–Schmidt): $b_1 = a_1$, and for k > 1,
$b_k = a_k - \sum_{i<k} \dfrac{\langle a_k, b_i \rangle}{\langle b_i, b_i \rangle}\, b_i$;
then unitize (normalize): $q_k = b_k / \lVert b_k \rVert$, giving $Q = (q_1, \ldots, q_n)$.
(2) Find R from the moduli of the orthogonal vectors and the Gram–Schmidt coefficients: R is upper triangular with $r_{kk} = \lVert b_k \rVert$ and $r_{ik} = \langle a_k, q_i \rangle$ for $i < k$, so that A = QR. A numerical sketch follows.
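A minimal numpy sketch of QR decomposition (the matrix is an arbitrary example; note that numpy's sign convention does not force a positive diagonal in R):

```python
import numpy as np

A = np.array([[1., 1.],
              [1., 0.],
              [0., 1.]])
Q, R = np.linalg.qr(A)                   # Q has orthonormal columns, R upper triangular
print(np.allclose(Q @ R, A))             # True
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: Q^T Q = I
```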
Givens matrix and Givens transformation:
The Givens matrix is an elementary rotation matrix, and the transformation determined by it is called the Givens transformation, an elementary rotation transformation.
Properties:
(1) A Givens matrix is an orthogonal matrix.
(2) Any invertible matrix can be transformed into an upper triangular matrix by left-multiplying finitely many Givens matrices.
Householder matrix:
The Householder matrix $H = I - 2uu^{T}$ (with $\lVert u \rVert = 1$) is also called the elementary reflection matrix: it mirrors a vector across the hyperplane orthogonal to u, leaving lengths unchanged.
Properties:
(1) It is a symmetric orthogonal matrix.
(2) $H^{2} = I$ (hence $H^{-1} = H$) and $\det H = -1$.
(3) A Givens matrix is the product of two Householder matrices.
(4) Any invertible matrix can be transformed into an upper triangular matrix by left-multiplying finitely many Householder matrices.
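A minimal numpy sketch of a Householder reflection and its properties (`householder` is a hypothetical helper):

```python
import numpy as np

def householder(u):
    """Elementary reflection H = I - 2 u u^T for a unit vector u."""
    u = u / np.linalg.norm(u)
    return np.eye(len(u)) - 2.0 * np.outer(u, u)

H = householder(np.array([1., 1.]))
print(np.allclose(H, H.T), np.allclose(H @ H, np.eye(2)))  # symmetric, H^2 = I
print(np.linalg.det(H))                                    # -1.0 (a reflection)
```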
3. Normal matrix and Schur decomposition
Schur's lemma:
If A is a real square matrix whose eigenvalues are all real, then there exists an orthogonal matrix Q such that $Q^{T}AQ = R$ is upper triangular.
That is, any real square matrix with real eigenvalues is orthogonally similar to an upper triangular matrix, whose main diagonal elements are the eigenvalues of the matrix A.
Schur decomposition process :
The principle: in the eigenvalue decomposition $A = P\Lambda P^{-1}$ it is known that P is a full-rank matrix; when P can be taken unitary, $P^{-1} = P^{H}$, so finding the unitary matrix P is equivalent to finding the full-rank matrix P as before.
Example:
Given a normal matrix A, find a unitary matrix Q such that $Q^{H}AQ$ is a diagonal matrix.
(1) Find the eigenvalues and the corresponding eigenvectors:
a. solve $\det(\lambda I - A) = 0$ for the eigenvalues $\lambda_1, \lambda_2, \ldots$;
b. for each eigenvalue $\lambda_i$, solve $(\lambda_i I - A)x = 0$ for an eigenvector.
(2) Orthogonally normalize the eigenvectors (Gram–Schmidt, then unitize).
(3) Assemble Q from the orthonormal eigenvectors; then $Q^{H}AQ = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$. A numerical sketch follows.
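A minimal scipy sketch of the Schur decomposition (the symmetric, hence normal, matrix is an arbitrary example, so T comes out diagonal):

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[0., 1.],
              [1., 0.]])            # symmetric, hence normal
T, Q = schur(A)                     # A = Q T Q^T, Q orthogonal, T upper triangular
print(np.allclose(Q @ T @ Q.T, A))  # True
print(np.diag(T))                   # eigenvalues on the diagonal; T is diagonal here
```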
4. Full rank decomposition
Definition: let $A \in \mathbb{C}^{m \times n}$ with rank r > 0. If there exist $B \in \mathbb{C}^{m \times r}$ (full column rank) and $C \in \mathbb{C}^{r \times n}$ (full row rank) such that A = BC, this is called a full-rank decomposition of the matrix. The full-rank decomposition is not unique.
Example (method):
Row-reduce A to its Hermite normal form H (the simplest form); take B = the pivot columns of A and C = the r non-zero rows of H. Then A = BC.
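A minimal sketch of full-rank decomposition via the Hermite normal form, using sympy's `rref` (the rank-2 matrix is an arbitrary example):

```python
import numpy as np
from sympy import Matrix

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 2., 4.]])              # rank 2
H, pivots = Matrix(A).rref()              # Hermite (reduced row echelon) form
r = len(pivots)
B = A[:, list(pivots)]                    # pivot columns of A   (m x r)
C = np.array(H[:r, :], dtype=float)       # non-zero rows of H   (r x n)
print(np.allclose(B @ C, A))              # True: A = BC
```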
5. Singular value decomposition
Singular value decomposition is derived on the basis of eigenvalue decomposition. The basic principle is the same, except that eigenvalue decomposition applies to an $n \times n$ square matrix, while singular value decomposition applies to an $m \times n$ matrix that need not be square.
For an $m \times n$ matrix A, $A^{H}A$ is an $n \times n$ Hermitian (symmetric) matrix and $AA^{H}$ is an $m \times m$ Hermitian matrix; then A factors as
$A = P \Sigma Q^{H}$.
The columns of P (eigenvectors of $AA^{H}$) are the left singular vectors; the columns of Q (eigenvectors of $A^{H}A$) are the right singular vectors.
Both P and Q are unitary matrices (orthogonal in the real case), and the two diagonalizations can be written as
$A^{H}A = Q\,(\Sigma^{T}\Sigma)\,Q^{H}$, $\qquad AA^{H} = P\,(\Sigma\Sigma^{T})\,P^{H}$.
The square roots of the nonzero eigenvalues of $A^{H}A$ are called the singular values of A.
Example (solution outline):
(1) Compute $A^{H}A$ (if A is not of full rank, only the nonzero leading block needs to be handled); its eigenvalues $\lambda_i$ give the singular values $\sigma_i = \sqrt{\lambda_i}$, and its unitized (orthonormalized) eigenvectors give Q.
(2) Compute $AA^{H}$; its nonzero eigenvalues are the same $\lambda_i$, and its unitized eigenvectors give P. Equivalently, $p_i = A q_i / \sigma_i$ for $\sigma_i \neq 0$.
(3) Assemble $A = P \Sigma Q^{H}$, with $\Sigma$ the $m \times n$ matrix carrying $\sigma_1, \sigma_2, \ldots$ on its diagonal.
Solution idea:
The solution process of singular value decomposition (SVD) is actually extremely simple; it rests on the two identities
$A^{H}A = Q(\Sigma^{T}\Sigma)Q^{H}$ and $AA^{H} = P(\Sigma\Sigma^{T})P^{H}$.
So treat $A^{H}A$ and $AA^{H}$ as square matrices and Schur-decompose (orthogonally diagonalize) them, finding the orthonormal eigenbasis for each eigenvalue. If A is an $m \times n$ matrix, this yields P ($m \times m$) and Q ($n \times n$), and the two products share the same nonzero eigenvalues.
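A minimal numpy sketch of the SVD and of the relation between the singular values and the eigenvalues of $A^{T}A$ (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])
P, s, Qh = np.linalg.svd(A)           # A = P diag(s) Q^H (numpy returns Q^H directly)
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)
print(np.allclose(P @ Sigma @ Qh, A))  # True

# Singular values = square roots of the nonzero eigenvalues of A^T A:
lams = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1][:2]
print(np.sqrt(lams), s)                # identical
```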
6. Spectral decomposition
Definition: a simple matrix (diagonalizable matrix) A can be decomposed into a sum of eigenvalues times idempotent matrices, that is
$A = \sum_i \lambda_i G_i$, with $G_i^2 = G_i$, $G_iG_j = O$ for $i \neq j$, and $\sum_i G_i = I$.
The $\lambda_i$ are called the spectral values of A, that is, the eigenvalues of A, and the $G_i$ are called the spectral matrices of A.
Therefore, spectral decomposition can be regarded as a rewriting of the eigenvalue decomposition of a matrix: it decomposes the matrix into a sum of products of a single eigenvalue and a single matrix.
From the eigenvalue decomposition $A = P\Lambda P^{-1}$: writing $p_i$ for the columns of P and $q_i^{T}$ for the rows of $P^{-1}$, we get $A = \sum_i \lambda_i\, p_i q_i^{T}$.
Solution steps:
(1) Find the eigenvalues and eigenvectors.
(2) Form P from the eigenvectors and compute $P^{-1}$.
(3) Build $G_i = p_i q_i^{T}$ from the columns of P and the rows of $P^{-1}$, grouping the terms of a repeated eigenvalue into one $G_i$, and sum: $A = \sum_i \lambda_i G_i$. A numerical sketch follows.
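A minimal numpy sketch of the spectral decomposition built from the eigenvalue decomposition (the matrix is an arbitrary example with distinct eigenvalues):

```python
import numpy as np

A = np.array([[3., 1.],
              [0., 2.]])                 # diagonalizable, eigenvalues 3 and 2
vals, P = np.linalg.eig(A)
Pinv = np.linalg.inv(P)

# G_i = p_i q_i^T, with p_i the i-th column of P and q_i^T the i-th row of P^{-1}
G = [np.outer(P[:, i], Pinv[i, :]) for i in range(2)]
print(np.allclose(sum(v * g for v, g in zip(vals, G)), A))  # A = sum lambda_i G_i
print([np.allclose(g @ g, g) for g in G])                   # each G_i is idempotent
```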
4. Generalized inverse matrix
The generalized inverse corresponds to the four Penrose equations below.
Let $A \in \mathbb{C}^{m \times n}$. If a matrix $X \in \mathbb{C}^{n \times m}$ satisfies one or more of
(1) $AXA = A$; (2) $XAX = X$; (3) $(AX)^{H} = AX$; (4) $(XA)^{H} = XA$,
then X is called a generalized inverse matrix of A. If all four are satisfied, X is the (unique) Moore–Penrose inverse of A, written $A^{+}$.
1. Generalized inverse via full-rank decomposition
If $A = BC$ is a full-rank decomposition, then $A^{+} = C^{H}(CC^{H})^{-1}(B^{H}B)^{-1}B^{H}$.
Therefore, it is necessary to first find the full-rank decomposition.
2. Generalized inverse via singular value decomposition
If $A = P \begin{pmatrix} \Sigma & O \\ O & O \end{pmatrix} Q^{H}$, then $A^{+} = Q \begin{pmatrix} \Sigma^{-1} & O \\ O & O \end{pmatrix} P^{H}$.
3. Generalized inverse via spectral decomposition
4. Consistent systems of equations
Definition: given a system of non-homogeneous linear equations Ax = b,
if $\operatorname{rank}(A) = \operatorname{rank}([A,\, b])$ (equivalently, $b \in R(A)$), then this system of linear equations is said to be consistent (compatible).
5. {1}-inverses
A{1} denotes the set of all matrices X satisfying Penrose equation (1), AXA = A; any such X is called a {1}-inverse of A, written $A^{(1)}$.
6. Using the generalized inverse to decide whether an equation has a solution
To determine whether Ax = b has a solution:
if $AA^{(1)}b = b$ (in particular, if $AA^{+}b = b$), then there is a solution; otherwise there is none. A numerical sketch follows.
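A minimal numpy sketch of the solvability test $AA^{+}b = b$ (the singular matrix and right-hand sides are arbitrary examples):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])                 # rank 1, singular
A_plus = np.linalg.pinv(A)               # Moore-Penrose inverse

b1 = np.array([1., 2.])                  # b1 lies in R(A)      -> consistent
b2 = np.array([1., 0.])                  # b2 is not in R(A)    -> inconsistent
print(np.allclose(A @ A_plus @ b1, b1))  # True:  solution exists
print(np.allclose(A @ A_plus @ b2, b2))  # False: no solution
```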
5. Vector and matrix norms
For Ax = b, the norm of the vector x measures the size of the vector, and the norm of the matrix A measures the rate of change from vector x to vector b, that is, the size of the scaling. A norm is a tool for measuring size.
1. Vector norm
| Category | Formula | Meaning |
| --- | --- | --- |
| 1-norm | $\lVert x \rVert_1 = \sum_i \lvert x_i \rvert$ | sum of the absolute values of the components of x |
| 2-norm | $\lVert x \rVert_2 = \left(\sum_i \lvert x_i \rvert^2\right)^{1/2}$ | Euclidean distance |
| $\infty$-norm | $\lVert x \rVert_\infty = \max_i \lvert x_i \rvert$ | largest absolute value among the components of x |
| p-norm | $\lVert x \rVert_p = \left(\sum_i \lvert x_i \rvert^p\right)^{1/p}$ | p-th root of the sum of p-th powers |
| E-norm | $\left(\sum_i \lvert x_i \rvert^2\right)^{1/2}$ | Euclidean norm; written with absolute values, it agrees with the 2-norm |
2. Matrix norms

| Category | Formula | Meaning |
| --- | --- | --- |
| 1-norm | $\lVert A \rVert_1 = \max_j \sum_i \lvert a_{ij} \rvert$ | maximum column absolute sum |
| 2-norm | $\lVert A \rVert_2 = \sqrt{\lambda_{\max}(A^{H}A)}$ | square root of the largest eigenvalue of $A^{H}A$ (spectral norm) |
| $\infty$-norm | $\lVert A \rVert_\infty = \max_i \sum_j \lvert a_{ij} \rvert$ | corresponds to the 1-norm: maximum row absolute sum |
| F-norm | $\lVert A \rVert_F = \left(\sum_{i,j} \lvert a_{ij} \rvert^2\right)^{1/2}$ | square root of the sum of squares of all elements |
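A minimal numpy sketch of the vector and matrix norms above (the vector and matrix are arbitrary examples):

```python
import numpy as np

x = np.array([3., -4.])
print(np.linalg.norm(x, 1),       # 7.0 : sum of absolute values
      np.linalg.norm(x, 2),       # 5.0 : Euclidean length
      np.linalg.norm(x, np.inf))  # 4.0 : largest absolute component

A = np.array([[1., -2.],
              [3., 4.]])
print(np.linalg.norm(A, 1),       # 6.0 : max column absolute sum
      np.linalg.norm(A, np.inf),  # 7.0 : max row absolute sum
      np.linalg.norm(A, 2),       # spectral norm = sqrt(lambda_max(A^T A))
      np.linalg.norm(A, 'fro'))   # Frobenius norm
```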
3. Solutions to non-homogeneous linear equations
Because Ax = b, a solution is $x = A^{+}b$ (the minimal-norm least-squares solution).
If A has full row rank, then: $A^{+} = A^{H}(AA^{H})^{-1}$.
If A has full column rank, then: $A^{+} = (A^{H}A)^{-1}A^{H}$.
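A minimal numpy sketch of the full-column-rank formula $(A^{H}A)^{-1}A^{H}$ (the matrix and right-hand side are arbitrary examples):

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])                        # full column rank
A_plus = np.linalg.inv(A.T @ A) @ A.T           # (A^H A)^{-1} A^H
print(np.allclose(A_plus, np.linalg.pinv(A)))   # True

b = np.array([1., 2., 0.])
x = A_plus @ b                                  # least-squares solution of Ax = b
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```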
6. Matrix sequences and matrix functions
1. Matrix functions
Suppose the one-variable function f(z) can be expanded into a power series of z,
$f(z) = \sum_{k=0}^{\infty} c_k z^k$,
where r denotes the convergence radius of the power series. When the spectral radius of the n-th order matrix A satisfies $\rho(A) < r$, the matrix power series $\sum_{k=0}^{\infty} c_k A^k$ converges, and its sum is called the matrix function, written f(A).
Formally, the scalar z is replaced by the matrix A: e.g. $e^z \to e^A$, $\sin z \to \sin A$.
2. Solving matrix functions
1. Undetermined coefficient method
Solution idea:
First find the minimal polynomial $m(\lambda)$ of A, say of degree m. Then assume the matrix function to be found is $f(A) = g(A)$, where g is a polynomial of degree less than m with undetermined coefficients. Substitute the eigenvalues one by one, $f(\lambda_i) = g(\lambda_i)$; if there are multiple roots, also equate derivatives, $f'(\lambda_i) = g'(\lambda_i), \ldots$
Example (illustrative): suppose the eigenvalues of A are 1 and 2 and the minimal polynomial is $(\lambda - 1)(\lambda - 2)$; find $e^{At}$.
Set $e^{At} = a_0 I + a_1 A$. Substituting the eigenvalues into $e^{\lambda t} = a_0 + a_1 \lambda$ gives $a_0 + a_1 = e^{t}$ and $a_0 + 2a_1 = e^{2t}$,
so $a_1 = e^{2t} - e^{t}$, $a_0 = 2e^{t} - e^{2t}$, and $e^{At} = (2e^{t} - e^{2t})I + (e^{2t} - e^{t})A$.
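A numerical check of this illustrative example, a minimal scipy sketch (A is any matrix with eigenvalues 1 and 2):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative matrix with eigenvalues 1 and 2, minimal polynomial (l-1)(l-2)
A = np.array([[1., 1.],
              [0., 2.]])
t = 0.7

# e^{At} = a0 I + a1 A  with  a0 + a1 = e^t,  a0 + 2 a1 = e^{2t}
a1 = np.exp(2*t) - np.exp(t)
a0 = 2*np.exp(t) - np.exp(2*t)
print(np.allclose(a0*np.eye(2) + a1*A, expm(A*t)))   # True
```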
2. Jordan normal form method
3. Matrix differential equations
(1) Homogeneous differential equation
$\dfrac{dx}{dt} = Ax$, with general solution $x(t) = e^{At}c$ and, under the initial condition, $x(t) = e^{At}x(0)$.
(2) Non-homogeneous differential equation
$\dfrac{dx}{dt} = Ax + f(t)$, with solution $x(t) = e^{At}x(0) + \displaystyle\int_0^t e^{A(t-s)} f(s)\, ds$.
Example (outline):
Solve a differential equation of this form satisfying a given x(0).
First find the general solution: compute $e^{At}$ (for instance by the undetermined coefficient method above), giving $x = e^{At}c$;
the initial condition yields c = x(0).
Then find a particular solution from the integral term, and
merge the two to obtain the full solution. A numerical sketch follows.
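A minimal scipy sketch checking $x(t) = e^{At}x(0)$ against a numerical ODE solve (the matrix and initial value are arbitrary examples):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0., 1.],
              [-2., -3.]])
x0 = np.array([1., 0.])

# Homogeneous solution x(t) = e^{At} x(0), checked against a numerical solver
sol = solve_ivp(lambda t, x: A @ x, (0., 1.), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(expm(A * 1.0) @ x0, sol.y[:, -1], atol=1e-6))   # True
```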
7. Matrix eigenvalue estimation
(1) Row Gershgorin discs
For each row, the sum of the absolute values of the off-diagonal elements is the radius of that row's Gershgorin disc, and the center of the disc is the diagonal element: every eigenvalue of A lies in the union of the discs $\lvert z - a_{ii} \rvert \le R_i = \sum_{j \neq i} \lvert a_{ij} \rvert$.
(2) Column Gershgorin discs
The column discs follow the same principle as the row discs, applied to the columns (that is, to $A^{T}$).
(3) Gershgorin disc isolation
If a group of k discs is disjoint from all the remaining discs, that group contains exactly k eigenvalues; in particular, an isolated disc contains exactly one eigenvalue. A numerical sketch follows.
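A minimal numpy sketch computing the row Gershgorin discs (the matrix is an arbitrary example whose three discs are disjoint, so each contains exactly one eigenvalue):

```python
import numpy as np

A = np.array([[10., 1., 0.],
              [0.5, 4., 0.5],
              [0., 1., 1.]])

# Row Gershgorin discs: center a_ii, radius = sum of |off-diagonal| in row i
centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
for c, r in zip(centers, radii):
    print(f"disc: center {c}, radius {r}")

# Every eigenvalue lies in the union of the discs
print(np.linalg.eigvals(A))
```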