A short note on some points of linear algebra

Similar matrices

Definition: If for a matrix \(\text A\) there exist a matrix \(\text B\) and an invertible matrix \(\Phi\) satisfying \(\text B=\Phi^{-1}\text A\Phi\), then we say that \(\text A\) is similar to \(\text B\), written \(\text A\sim\text B\).
Similarity has the following properties:

  1. Reflexivity: \(\text A\sim\text A\)
  2. Symmetry: if \(\text A\sim\text B\), then \(\text B\sim\text A\)
  3. Transitivity: if \(\text A\sim\text B\) and \(\text B\sim\text C\), then \(\text A\sim\text C\)

Similar matrices have the following properties:

  1. They have equal rank
  2. They have equal determinants
  3. They have the same eigenvalues, although the corresponding eigenvectors are generally different
  4. They have the same characteristic polynomial
  5. They are either both invertible or both singular; if both are invertible, their inverses are also similar

The blogger proves only the fourth point:

\[\begin{aligned} |\lambda\text E-\text B|=&|\lambda\Phi^{-1}\text E\Phi-\Phi^{-1}\text A\Phi|\\ =&|\Phi^{-1}(\lambda\text E-\text A)\Phi|\\ =&|\Phi^{-1}|\times|\lambda\text E-\text A|\times|\Phi|\\ =&|\lambda\text E-\text A| \end{aligned} \]

since \(|\Phi^{-1}|\times|\Phi|=1\), so the two characteristic polynomials are identical.
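A small NumPy sketch to check this numerically (the matrix entries and the unit-triangular choice of \(\Phi\) are arbitrary, picked so that \(\Phi\) is certainly invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 4)).astype(float)
# A unit upper-triangular Phi has determinant 1, hence is invertible
Phi = np.eye(4) + np.triu(rng.integers(-3, 4, size=(4, 4)).astype(float), 1)
B = np.linalg.inv(Phi) @ A @ Phi          # B = Phi^{-1} A Phi, so A ~ B

# np.poly(M) returns the coefficients of det(lambda*E - M)
print(np.allclose(np.poly(A), np.poly(B), atol=1e-6))  # True
```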

Matrix diagonalization

Suppose a matrix \(\text A\) is similar to a diagonal matrix \(\text B\), i.e. \(\text A=\Phi\text B\Phi^{-1}\); then

\[\begin{aligned} \text A^m=&(\Phi\text B\Phi^{-1})^m\\ =&\Phi\text B(\Phi^{-1}\Phi)\text B(\Phi^{-1}\Phi)\cdots\text B\Phi^{-1}\\ =&\Phi\text B^m\Phi^{-1} \end{aligned} \]
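This trick can be sketched in a few lines of NumPy (the matrix here is an arbitrary symmetric example, chosen because a real symmetric matrix is always diagonalizable):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # real symmetric, hence diagonalizable
eigvals, Phi = np.linalg.eig(A)         # columns of Phi are eigenvectors
m = 10

# A = Phi diag(eigvals) Phi^{-1}  =>  A^m = Phi diag(eigvals**m) Phi^{-1}
Am = Phi @ np.diag(eigvals ** m) @ np.linalg.inv(Phi)
print(np.allclose(Am, np.linalg.matrix_power(A, m)))  # True
```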

The problem is thus converted to finding the bridge matrix \(\Phi\), which is where the real work is.
The theorem that may be used is as follows:
an \(n\times n\) matrix \(\text A\) can be diagonalized if and only if \(\text A\) has \(n\) linearly independent eigenvectors \(v_1,v_2,\dots,v_n\), in which case the columns of \(\Phi\) are those eigenvectors, i.e. \(\Phi_{i,j}=v_{j,i}\).
So the problem becomes finding the eigenvectors quickly, and for that we first consider how to quickly find the characteristic polynomial of the matrix.

Finding the characteristic polynomial of a matrix

Let's start with a no-brainer approach: evaluate \(|\lambda\text E-\text A|\) at \(n+1\) points by Gaussian elimination and then interpolate the degree-\(n\) polynomial, i.e. \(dft+\) Gaussian elimination \(+idft\); the time complexity is \(O(n^4)\). For the interpolation step, use Lagrange interpolation at arbitrary points, or Bluestein's algorithm when evaluating at roots of unity.
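A minimal sketch of this evaluate-then-interpolate method, using exact rationals and plain Lagrange interpolation instead of dft/idft (a contest version would typically work modulo a prime instead):

```python
from fractions import Fraction

def det(M):
    """Determinant by Gaussian elimination over exact fractions, O(n^3)."""
    M = [row[:] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return sign * d

def charpoly(A):
    """Evaluate det(x*E - A) at n+1 points, then Lagrange-interpolate: O(n^4)."""
    n = len(A)
    xs = range(n + 1)
    ys = [det([[Fraction(x if i == j else 0) - A[i][j] for j in range(n)]
               for i in range(n)]) for x in xs]
    coeffs = [Fraction(0)] * (n + 1)
    for i, yi in enumerate(ys):
        basis, denom = [Fraction(1)], Fraction(1)
        for j in xs:                      # build prod_{j != i} (x - j)
            if j == i:
                continue
            denom *= i - j
            new = [Fraction(0)] * (len(basis) + 1)
            for k, c in enumerate(basis):
                new[k + 1] += c           # multiply by x
                new[k] -= c * j           # subtract j * basis
            basis = new
        for k in range(n + 1):
            coeffs[k] += yi * basis[k] / denom
    return coeffs                         # coeffs[k] is the coefficient of x^k

print(charpoly([[2, 1], [1, 2]]))  # [Fraction(3, 1), Fraction(-4, 1), Fraction(1, 1)]
```

The example matrix has characteristic polynomial \(x^2-4x+3\), so the printed coefficients are \([3,-4,1]\) from constant term up.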

Now consider using similarity transformations to optimize this. First, let captain \(suncongbo\) educate me:
suppose the initial matrix is \(\text A\). Each time, multiply \(\text A\) on the left by an elementary row-transformation matrix and then on the right by the inverse of that transformation, an elementary column-transformation matrix, so that the result stays similar to \(\text A\).
Consider what shape the matrix can be eliminated into in the end; it turns out to be an upper Hessenberg matrix (zero everywhere below the first subdiagonal). If we used row \(i\) to eliminate column \(i\) of the rows below it, the paired column transformation would write back into column \(i\) and destroy the zeros just created, which defeats the purpose; so we eliminate with row \(i+1\) instead, and the final result is upper Hessenberg.
How can this gadget be used to find the characteristic polynomial quickly? A simple observation: in \(|\lambda\text E-\text A|\), the last row has nonzero entries in only two columns, \(n-1\) and \(n\). Enumerate which column the last row selects in the determinant expansion. If it selects column \(n\), the contribution is \((\lambda-a_{n,n})\) times the characteristic polynomial of the upper-left \((n-1)\times(n-1)\) submatrix. If it selects column \(n-1\), then row \(n-1\) is forced to select column \(n-2\), and so on down the subdiagonal, until some row \(j\) selects column \(n\). The factors along this chain,

\(a_{n,n-1},a_{n-1,n-2},\dots,a_{j+1,j},a_{j,n}\),

do not contain the variable \(\lambda\), so we can recurse directly; the complexity is \(O(n^3)\).
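The two steps above — eliminate to upper Hessenberg form by paired row/column operations, then recurse on the last row's two choices — can be sketched as follows, over exact rationals (a contest implementation would work modulo a prime; handling a zero pivot by a row-and-column swap is my own choice here):

```python
from fractions import Fraction

def hessenberg(A):
    """Reduce A to upper Hessenberg form by similarity transformations:
    every row operation is paired with the inverse column operation."""
    A = [[Fraction(x) for x in row] for row in A]
    n = len(A)
    for c in range(n - 2):
        # pivot in rows c+1..n-1 of column c (row c+1, not row c!)
        piv = next((r for r in range(c + 1, n) if A[r][c] != 0), None)
        if piv is None:
            continue
        if piv != c + 1:                  # swap rows AND columns: a similarity
            A[piv], A[c + 1] = A[c + 1], A[piv]
            for r in range(n):
                A[r][piv], A[r][c + 1] = A[r][c + 1], A[r][piv]
        for r in range(c + 2, n):
            f = A[r][c] / A[c + 1][c]
            if f == 0:
                continue
            for k in range(n):            # row r -= f * row c+1
                A[r][k] -= f * A[c + 1][k]
            for k in range(n):            # inverse op: column c+1 += f * column r
                A[k][c + 1] += f * A[k][r]
    return A

def hessenberg_charpoly(H):
    """Characteristic polynomial det(x*E - H) of an upper Hessenberg H,
    via the last-row recurrence described in the text; O(n^3)."""
    n = len(H)
    polys = [[Fraction(1)]]                       # p_0 = 1
    for k in range(1, n + 1):
        p = [Fraction(0)] * (k + 1)
        for i, c in enumerate(polys[k - 1]):      # (x - H[k-1][k-1]) * p_{k-1}
            p[i + 1] += c
            p[i] -= H[k - 1][k - 1] * c
        prod = Fraction(1)
        for j in range(k - 1, 0, -1):             # chain of subdiagonal picks
            prod *= H[j][j - 1]
            coef = H[j - 1][k - 1] * prod         # none of these factors contain x
            for i, c in enumerate(polys[j - 1]):
                p[i] -= coef * c
        polys.append(p)
    return polys[n]                               # coefficient of x^k at index k

coeffs = hessenberg_charpoly(hessenberg([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
print(coeffs)  # x^3 - 15x^2 - 18x
```

The example matrix has trace 15, determinant 0, and second elementary symmetric sum \(-18\), so the coefficients come out as \([0,-18,-15,1]\) from constant term up.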

However, when I first tried to understand the approach above, I thought we would first multiply all the elementary row-transformation matrices on the left, and only afterwards multiply all the elementary column-transformation matrices on the right; a brief note on that view follows.
Each time, consider using row \(i\) to eliminate the entry in column \(n-i+1\) of each row \(j\) \((j>i)\). The inverse column transformation does affect an earlier column, but it does not change the shape of the matrix, so the eliminated matrix satisfies \(\forall i+j>n+1,\ a_{i,j}=0\): it is zero below the anti-diagonal.
To find the characteristic polynomial of the current matrix, use the idea of block matrices: the determinant decomposes recursively into subproblems, and the complexity is again \(O(n^3)\).

Source: www.cnblogs.com/ldxcaicai/p/12727745.html