Numerical calculation of eigenvalues and eigenvectors (Day 5)

The previous class introduced two iterative algorithms for linear systems: Jacobi iteration (synchronous update) and Gauss-Seidel iteration (asynchronous update). For special tridiagonal systems, the simpler and more efficient Thomas algorithm can also be used. After introducing vector norms and matrix norms, the relative error of the numerical solution of a linear system can be bounded using the condition number. This class introduces matrix eigenvalues, eigenvectors, and several of the numerical algorithms involved.

1. Eigenvalues and eigenvectors

Given an \(n \times n\) matrix \(A\), a number \(\lambda\) satisfying the following formula is called an eigenvalue of \(A\): \[Au = \lambda u\] and the corresponding vector \(u\) is called an eigenvector associated with the eigenvalue \(\lambda\). The computation can be generalized to a more general form, not just matrix operations. Suppose \(u = f(x)\) is a continuous function of \(x\) and \(A = \frac{d}{dx}\) denotes the differential operator; then \[\frac{d^2u}{dx^2} = k^2u\] can be expressed as \[A^2u = k^2u\] This is a broader representation of the eigenvalue problem (for example, \(u = e^{kx}\) is an eigenfunction, since \(\frac{d^2}{dx^2}e^{kx} = k^2e^{kx}\)).
Eigenvalues and eigenvectors play a very important role in engineering and science. For example, in vibration studies the eigenvalues represent the natural frequencies of a system or component, and the eigenvectors describe the shapes of these vibrations.
To find the eigenvalues and eigenvectors of a matrix, start from the definition: \[(A - \lambda I)u = 0\] If \(A - \lambda I\) is nonsingular, this equation has only the zero solution, which does not meet the requirement. Therefore, to obtain the eigenvalues we need \[\det(A - \lambda I) = 0\] This is known as the characteristic equation; the roots of the characteristic equation are the eigenvalues of the matrix \(A\).
Example: \[A = \begin{bmatrix} 1 & 2 \\ 3 & 2 \end{bmatrix}\]
The characteristic equation is \[\det(A - \lambda I) = \det\begin{bmatrix} 1-\lambda & 2 \\ 3 & 2-\lambda \end{bmatrix} = (1-\lambda)(2-\lambda) - 6 = 0\] Solving it gives the eigenvalues \(\lambda_1 = 4, \lambda_2 = -1\). However, for higher-order matrices, finding the eigenvalues is not so simple and requires numerical methods.
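
For this low-order example, the result is easy to check numerically (a minimal MATLAB sketch; the polynomial coefficients come from expanding the characteristic equation as \(\lambda^2 - 3\lambda - 4 = 0\)):

A = [1 2; 3 2];
roots([1 -3 -4])   % roots of the characteristic polynomial: 4 and -1
eig(A)             % the built-in routine returns the same eigenvalues

Both calls return the eigenvalues 4 and -1 (possibly in a different order).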

2. Power method and inverse power method

2.1 Power Method

This section describes how to use the power method and the inverse power method to find, respectively, the eigenvalues of largest and smallest modulus among all the eigenvalues of a matrix. Given a matrix \(A\), assume it has \(n\) real eigenvalues \[|\lambda_1| > |\lambda_2| > \cdots > |\lambda_n|\] with corresponding eigenvectors \(u_1, u_2, \ldots, u_n\). The power method computes the largest eigenvalue \(\lambda_1\) by iteration. First choose a random initial vector \(x_1\), \[x_1 = c_1u_1 + c_2u_2 + \cdots + c_nu_n\] Iterate to compute \(x_2\): \[Ax_1 = c_1Au_1 + c_2Au_2 + \cdots + c_nAu_n = \lambda_1c_1x_2\] \[x_2 = u_1 + \frac{c_2}{c_1}\frac{\lambda_2}{\lambda_1}u_2 + \cdots + \frac{c_n}{c_1}\frac{\lambda_n}{\lambda_1}u_n\] Then compute \(x_3\): \[Ax_2 = \lambda_1u_1 + \frac{c_2}{c_1}\frac{\lambda_2^2}{\lambda_1}u_2 + \cdots + \frac{c_n}{c_1}\frac{\lambda_n^2}{\lambda_1}u_n = \lambda_1x_3\] \[x_3 = u_1 + \frac{c_2}{c_1}\frac{\lambda_2^2}{\lambda_1^2}u_2 + \cdots + \frac{c_n}{c_1}\frac{\lambda_n^2}{\lambda_1^2}u_n\] In general, the iteration formula is \[x_{k+1} = u_1 + \frac{c_2}{c_1}\frac{\lambda_2^k}{\lambda_1^k}u_2 + \cdots + \frac{c_n}{c_1}\frac{\lambda_n^k}{\lambda_1^k}u_n\] Since \(\lambda_1\) is the eigenvalue of largest modulus, \(\left|\frac{\lambda_i}{\lambda_1}\right| < 1\) for \(i > 1\), so when \(k\) is sufficiently large, \(Ax_{k+1} \to \lambda_1u_1\) and \(x_{k+1} \to u_1\). In an actual implementation the values of \(\lambda_1\) and \(u_1\) are not known in advance, so we iterate \(x_{k+1} = Ax_k\) and normalize \(x_{k+1}\) with a vector norm at each step so that its entries do not grow without bound. Note: the "maximum eigenvalue" found here is the eigenvalue of largest modulus, not necessarily the algebraically largest one.
MATLAB implementation:

A=[4 2 -2; -2 8 1; 2 4 -4]
x=[1 1 1]'                  % initial vector
for i=1:16
    x=A*x;                  % power iteration step
    x=x/norm(x,Inf);        % normalize with the infinity norm
end
A*x./x                      % elementwise ratios approximate lambda_1
%%%%%%%%%%%%%%%%
A =

     4     2    -2
    -2     8     1
     2     4    -4

ans =

    7.7502
    7.7503
    7.7502

>> eig(A)

ans =

   -3.6102
    3.8599
    7.7504
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function e = MaxEig(A)
% Power method: returns the eigenvalue of A with the largest modulus.
    x = ones(size(A,1),1);      % initial column vector
    for i=1:40
        x=A*x;                  % power iteration step
        [mx,id] = max(abs(x));  % locate the entry of largest modulus
        x=x/x(id);              % normalize so that entry equals 1
    end
    e = A*x./x;                 % elementwise eigenvalue estimates
    [mx,id] = max(abs(e));
    e = e(id);                  % pick the estimate of largest modulus
end
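
The fixed 40 iterations above suffice for small examples; the convergence rate is governed by the ratio \(|\lambda_2/\lambda_1|\) (about \(3.86/7.75 \approx 0.5\) for the matrix above, which is why 16 iterations already gave 4 to 5 correct digits). Below is a sketch of a variant that stops on a relative tolerance instead of a fixed count; the function name MaxEigTol, the tolerance argument, and the iteration cap are illustrative choices, not part of the original code.

function e = MaxEigTol(A, tol)
% Power method with a relative stopping tolerance instead of a
% fixed iteration count (a sketch, not from the original lesson).
    x = ones(size(A,1),1);        % initial column vector
    e = 0;                        % previous eigenvalue estimate
    for i = 1:1000                % safety cap on the iteration count
        x = A*x;                  % power iteration step
        [~,id] = max(abs(x));
        eNew = x(id);             % normalization factor -> lambda_1
        x = x/x(id);              % normalize the iterate
        if abs(eNew - e) < tol*abs(eNew)
            e = eNew;
            return                % converged to the requested tolerance
        end
        e = eNew;
    end
end

For the matrices above, MaxEigTol(A, 1e-10) should reproduce the result of MaxEig(A).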
2.2 Inverse power method

Given a matrix \(A\), assume it has \(n\) real eigenvalues \[|\lambda_1| > |\lambda_2| > \cdots > |\lambda_n|\] with corresponding eigenvectors \(u_1, u_2, \ldots, u_n\), so that \(\lambda_n\) is the eigenvalue of smallest modulus. First note that if \(Au = \lambda u\), then \(A^{-1}Au = A^{-1}\lambda u\), i.e. \(u = \lambda A^{-1}u\), and therefore \(A^{-1}u = \frac{1}{\lambda}u\). It follows that when \(\lambda_n\) is the smallest eigenvalue of the matrix \(A\), \(\frac{1}{\lambda_n}\) is the largest eigenvalue of \(A^{-1}\). We can therefore use the power method to find the largest eigenvalue of \(A^{-1}\) and take its reciprocal, which is the smallest eigenvalue of \(A\). Note that for the inverse power method, when the smallest eigenvalue is 0 its reciprocal is undefined; in that case the inverse power method finds the second smallest eigenvalue instead, and the shifted inverse power method is needed (see the sketch after the example below).
MATLAB implementation

function e = MinEig(A)
% Inverse power method: returns the eigenvalue of A with the
% smallest modulus by applying the power method to inv(A).
    invA = inv(A);              % for large systems, prefer A\x inside the loop
    x = ones(size(A,1),1);      % initial column vector
    for i=1:40
        x=invA*x;               % power iteration step on inv(A)
        [mx,id] = max(abs(x));  % locate the entry of largest modulus
        x=x/x(id);              % normalize so that entry equals 1
    end
    e = invA*x./x;              % elementwise estimates of 1/lambda_min
    [mx,id] = max(abs(e));
    e = 1/e(id);                % reciprocal gives the smallest eigenvalue
end
%%%%%%%%%%%%%%%%%%%%
>> A = [4 0 0;0 -1 0;0 0 -9]

A =

     4     0     0
     0    -1     0
     0     0    -9

>> MinEig(A)

ans =

    -1

>> eig(A)

ans =

    -9
    -1
     4
>> MaxEig(A)

ans =

    -9
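
As noted in Section 2.2, when an eigenvalue of \(A\) is 0 (or when the eigenvalue closest to a given value \(\sigma\) is wanted), the shifted inverse power method applies the power method to \((A - \sigma I)^{-1}\), whose eigenvalue of largest modulus is \(\frac{1}{\lambda - \sigma}\), where \(\lambda\) is the eigenvalue of \(A\) closest to \(\sigma\). A minimal sketch follows; the function name ShiftInvEig and the fixed iteration count are illustrative choices, not from the original.

function e = ShiftInvEig(A, sigma)
% Shifted inverse power method: finds the eigenvalue of A closest
% to the shift sigma (a sketch, not from the original lesson).
% The shift must not itself be an eigenvalue of A.
    n = size(A,1);
    B = A - sigma*eye(n);         % shifted matrix
    x = ones(n,1);
    for i = 1:40
        x = B\x;                  % apply (A - sigma*I)^(-1) via backslash
        [~,id] = max(abs(x));
        mu = x(id);               % estimate of 1/(lambda - sigma)
        x = x/x(id);              % normalize the iterate
    end
    e = sigma + 1/mu;             % recover the eigenvalue of A
end

For the diagonal test matrix above, ShiftInvEig(A, 3.5) should return approximately 4, and ShiftInvEig(A, -0.1) approximately -1.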

3. QR decomposition

The power method and the inverse power method find only the eigenvalues of largest and smallest modulus. To find all the eigenvalues of a matrix, the QR decomposition method can be used. Suppose \(A\) is an \(n \times n\) matrix with \(n\) mutually distinct real eigenvalues. The theory behind the QR method relies on the fact that a similarity transformation \(C^{-1}AC\) leaves the eigenvalues unchanged. This is because if \(Au = \lambda u\), then taking \(v = C^{-1}u\) gives \(ACv = Au = \lambda Cv\), and further \(C^{-1}ACv = \lambda v\), so \(\lambda\) is an eigenvalue of \(C^{-1}AC\). The QR iteration on \(A\) proceeds as follows (a numerical check of the similarity claim is sketched after this list):

  • \(A_1 = A\); perform a QR decomposition of \(A_1\): \[A_1 = Q_1R_1\] where \(Q_1\) is an orthogonal matrix (\(Q_1Q_1^T = I\)) and \(R_1\) is an upper triangular matrix
  • Let \(A_2 = R_1Q_1\), i.e. \(A_2 = Q_1^{-1}A_1Q_1\), whose eigenvalues are the same as those of \(A\). Then perform a QR decomposition of \(A_2\): \[A_2 = Q_2R_2\]
  • \(A_3=R_2Q_2=Q^{-1}_2A_2Q_2=Q^{-1}_2Q^{-1}_1A_1Q_1Q_2\)
  • Repeat the process until \(A_k\) becomes an upper triangular matrix; the eigenvalues are the entries on its diagonal.
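
Before running the full iteration, the similarity-invariance claim can be verified numerically (a minimal sketch; the nonsingular matrix C below is an arbitrary illustrative choice):

% Similar matrices C^(-1)*A*C have the same eigenvalues as A
A = [6 -7 2; 4 -5 2; 1 -1 1];
C = [1 2 0; 0 1 1; 1 0 3];   % any nonsingular matrix works
sort(eig(A))                 % -1, 1, 2
sort(eig(C\A*C))             % the same eigenvalues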

MATLAB implementation

A=[ 6 -7 2 ; 4 -5 2; 1 -1 1]
A0=A;                 % keep a copy for comparison with eig
for i=1:40
    [Q R]=qr(A);      % factor A_k = Q_k*R_k
    A=R*Q;            % form A_{k+1} = R_k*Q_k, similar to A_k
end
A
ev=diag(A)            % eigenvalues appear on the diagonal
eig(A0)               % built-in result for comparison
%%%%%%%%%%%%%%%%
>> QRiteration

A =

     6    -7     2
     4    -5     2
     1    -1     1

A =

    2.0000   -8.6603   -7.4066
    0.0000   -1.0000   -1.0690
    0.0000   -0.0000    1.0000

ev =

    2.0000
   -1.0000
    1.0000

ans =

   -1.0000
    2.0000
    1.0000

In each iteration, the QR decomposition of the matrix is implemented using a special matrix, the Householder matrix, of the form \[H = I - 2\frac{vv^T}{v^Tv}\] where \(v = c + \|c\|_2e\), \(c\) and \(e\) are column vectors, and \(\|c\|_2\) is the 2-norm of the vector \(c\). The matrix \(H\) is symmetric as well as orthogonal, which means \(HAH\) is similar to \(A\). The following describes in detail how \(H\) matrices decompose \(A\) into the product of an orthogonal matrix and an upper triangular matrix.

  • Take \(c_1\) as the first column of the matrix \(A\): \[c_1 = \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{n1} \end{bmatrix}, \quad e_1 = \begin{bmatrix} \pm 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}\] where the sign of the first element of \(e_1\) is kept consistent with the sign of the first element of \(c_1\), and \(v_1 = c_1 + \|c_1\|_2e_1\)
  • Let \(H_1 = I - 2\frac{v_1v_1^T}{v_1^Tv_1}\), \(Q_1 = H_1\), \(R_1 = H_1A\); then \(Q_1\) is a symmetric orthogonal (Householder) matrix and \(A = Q_1R_1\). Note that \[v_1^Tv_1 = (c_1 + \|c_1\|_2e_1)^T(c_1 + \|c_1\|_2e_1) = 2\|c_1\|_2^2 + 2a_{11}\|c_1\|_2\] \[R_1 = H_1A = \begin{bmatrix} H_1c_1 & H_1a_2 & \cdots & H_1a_n \end{bmatrix}\] \[H_1c_1 = c_1 - 2\frac{v_1v_1^Tc_1}{v_1^Tv_1} = c_1 - \frac{2(\|c_1\|_2^2 + a_{11}\|c_1\|_2)v_1}{2\|c_1\|_2^2 + 2a_{11}\|c_1\|_2} = c_1 - v_1 = -\|c_1\|_2e_1\] Thus in the first column of \(R_1 = H_1A\), all elements except the first are \(0\): \[R_1 = \begin{bmatrix} a'_{11} & a'_{12} & \cdots & a'_{1n} \\ 0 & a'_{22} & \cdots & a'_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & a'_{n2} & \cdots & a'_{nn} \end{bmatrix}\]
  • Take \(c_2\) from the second column of \(R_1\), with its first entry set to zero: \[c_2 = \begin{bmatrix} 0 \\ a'_{22} \\ \vdots \\ a'_{n2} \end{bmatrix}, \quad e_2 = \begin{bmatrix} 0 \\ \pm 1 \\ \vdots \\ 0 \end{bmatrix}\] where the sign of the second element of \(e_2\) is kept consistent with the sign of the second element of \(c_2\), and \(v_2 = c_2 + \|c_2\|_2e_2\)
  • Let \(H_2 = I - 2\frac{v_2v_2^T}{v_2^Tv_2}\), \(Q_2 = Q_1H_2 = H_1H_2\), \(R_2 = H_2R_1 = H_2H_1A\); \(H_2\) is also a symmetric orthogonal matrix (\(A = H_1^TH_2^TR_2 = Q_2R_2\)). Similarly, in the second column of \(R_2 = H_2R_1\) the elements below the second are \(0\): \[R_2 = \begin{bmatrix} a''_{11} & a''_{12} & \cdots & a''_{1n} \\ 0 & a''_{22} & \cdots & a''_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a''_{nn} \end{bmatrix}\]
  • Repeat the process until \(R_{n-1}\) is an upper triangular matrix; the matrix \(A\) is then decomposed into the product of an orthogonal matrix \(Q_{n-1}\) and an upper triangular matrix \(R_{n-1}\).

MATLAB implementation:

function [Q R] = QRFactorization(R)
% The function factors a matrix [A] into an orthogonal matrix [Q]
% and an upper-triangular matrix [R].
% Input variables:
% R  The (square) matrix to be factored (the input matrix A is
%    received in R and reduced in place).
% Output variables:
% Q  Orthogonal matrix.
% R  Upper-triangular matrix.

nmatrix = size(R);
n = nmatrix(1);
I = eye(n);
Q = I;
for j = 1:n-1
    c = R(:,j);             % j-th column of the current R
    c(1:j-1) = 0;           % ignore entries above the pivot
    e(1:n,1) = 0;           % reset the unit-direction vector
    if c(j) > 0             % sign of e(j) matches the sign of c(j)
        e(j) = 1;
    else
        e(j) = -1;
    end
    clength = sqrt(c'*c);   % 2-norm of c
    v = c + clength*e;      % Householder vector
    H = I - 2/(v'*v)*v*v';  % Householder matrix
    Q = Q*H;                % accumulate Q = H_1*H_2*...*H_{n-1}
    R = H*R;                % zero the j-th column below the diagonal
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>> A=[ 6 -7 2 ; 4 -5 2; 1 -1 1]

A =

     6    -7     2
     4    -5     2
     1    -1     1

>> [Q R] = qr(A)

Q =

   -0.8242    0.3925   -0.4082
   -0.5494   -0.7290    0.4082
   -0.1374    0.5608    0.8165

R =

   -7.2801    8.6537   -2.8846
         0    0.3365   -0.1122
         0         0    0.8165

>> [Q1 R1] = QRFactorization(A)

Q1 =

   -0.8242    0.3925   -0.4082
   -0.5494   -0.7290    0.4082
   -0.1374    0.5608    0.8165

R1 =

   -7.2801    8.6537   -2.8846
    0.0000    0.3365   -0.1122
   -0.0000    0.0000    0.8165
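
A quick way to confirm that the hand-written factorization is correct is to check the reconstruction residual and the orthogonality of \(Q_1\); both norms should be on the order of machine precision:

norm(Q1*R1 - A)          % reconstruction error, close to 1e-15
norm(Q1'*Q1 - eye(3))    % orthogonality check, close to 1e-15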

4. Summary

This class introduced the concepts of matrix eigenvalues and eigenvectors. For low-order matrices, the eigenvalues can be found by solving the characteristic equation; for high-order matrices, the power method and the inverse power method can be used to find the eigenvalues of largest and smallest modulus. When all the eigenvalues of a matrix are required, QR decomposition can be used: the matrix \(A\) is reduced by similarity transformations to an upper triangular matrix whose diagonal elements are the eigenvalues of \(A\).

Origin www.cnblogs.com/SweetZxl/p/11274167.html