Eigenfaces

PCA (Principal Component Analysis) is the core of the Eigenfaces method.

Algorithmic Description of the Eigenfaces Method

Let $X = \{x_1, x_2, \ldots, x_n\}$ be a random vector with observations $x_i \in \mathbb{R}^d$.

  1. Compute the mean $\mu$:

    $$\mu = \frac{1}{n} \sum_{i=1}^{n} x_i$$

  2. Compute the covariance matrix $S$:

    $$S = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T$$

  3. Compute the eigenvalues $\lambda_i$ and eigenvectors $v_i$ of $S$:

    $$S v_i = \lambda_i v_i, \quad i = 1, 2, \ldots, n$$

  4. Order the eigenvectors in descending order of their eigenvalues. The $k$ principal components are the eigenvectors corresponding to the $k$ largest eigenvalues.
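
To make the four steps concrete, here is a minimal NumPy sketch (the names `pca`, `X`, `mu`, `W`, and `k` are my own, not part of the original tutorial; images are assumed to be flattened into the columns of `X`):

```python
import numpy as np

def pca(X, k):
    """PCA on a d x n data matrix X whose columns are flattened images."""
    n = X.shape[1]
    mu = X.mean(axis=1, keepdims=True)     # step 1: mean
    Xc = X - mu                            # center the observations
    S = (Xc @ Xc.T) / n                    # step 2: covariance matrix (d x d)
    eigvals, eigvecs = np.linalg.eigh(S)   # step 3: eigendecomposition (S is symmetric)
    order = np.argsort(eigvals)[::-1]      # step 4: sort descending by eigenvalue
    W = eigvecs[:, order[:k]]              # the k principal components
    return W, mu
```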

The $k$ principal components of the observed vector $x$ are then given by:

$$y = W^T (x - \mu)$$

where $W = (v_1, v_2, \ldots, v_k)$.

The reconstruction from the PCA basis is given by:

$$x = W y + \mu$$

where $W = (v_1, v_2, \ldots, v_k)$.
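
Continuing the sketch, projection and reconstruction are one line each (using the hypothetical `W` and `mu` returned by `pca` above; `x` and `y` are column vectors):

```python
def project(W, x, mu):
    # y = W^T (x - mu): coordinates of x in the PCA subspace
    return W.T @ (x - mu)

def reconstruct(W, y, mu):
    # x ~ W y + mu: approximate x from its subspace coordinates
    return W @ y + mu
```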

The Eigenfaces method then performs face recognition by:

  • Projecting all training samples into the PCA subspace.
  • Projecting the query image into the PCA subspace.
  • Finding the nearest neighbor between the projected training images and the projected query image.
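
A minimal sketch of these three steps, reusing the hypothetical helpers above; I use plain Euclidean distance for the nearest-neighbor search (the tutorial does not fix the metric here), with training images as the columns of `X_train`:

```python
def predict(W, mu, X_train, labels, query):
    P = project(W, X_train, mu)            # project all training samples (k x n)
    q = project(W, query, mu)              # project the query image (k x 1)
    dists = np.linalg.norm(P - q, axis=0)  # Euclidean distance to each training sample
    return labels[int(np.argmin(dists))]   # label of the nearest neighbor
```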

Still there's one problem left to solve. Imagine we are given 400 images sized $100 \times 100$ pixels. The Principal Component Analysis solves the covariance matrix $S = X X^T$, where $\mathrm{size}(X) = 10000 \times 400$ in our example. You would end up with a $10000 \times 10000$ matrix, roughly 0.8 GB (at 8 bytes per double-precision entry). Solving an eigenvalue problem of this size isn't feasible, so we'll need to apply a trick. From your linear algebra lessons you know that the $M \times M$ matrix $X X^T$ built from $N$ mean-centered samples, with $M > N$, can have at most $N - 1$ non-zero eigenvalues. So it's possible to take the eigenvalue decomposition of $X^T X$, which is only of size $N \times N$, instead:

$$X^T X v_i = \lambda_i v_i$$

and get the original eigenvectors of $S = X X^T$ by a left multiplication with the data matrix:

$$X X^T (X v_i) = \lambda_i (X v_i)$$

The resulting eigenvectors are orthogonal; to get orthonormal eigenvectors they still need to be normalized to unit length. I don't want to turn this into a publication, so please look into the literature for the derivation and proof of the equations.
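
Under the same assumptions as the sketches above, the trick looks like this in NumPy (the function name `pca_small` is mine):

```python
def pca_small(X, k):
    """PCA via the n x n matrix X^T X instead of the d x d covariance (d >> n)."""
    n = X.shape[1]
    mu = X.mean(axis=1, keepdims=True)
    Xc = X - mu
    M = (Xc.T @ Xc) / n                    # n x n Gram matrix instead of d x d covariance
    eigvals, eigvecs = np.linalg.eigh(M)
    order = np.argsort(eigvals)[::-1]
    W = Xc @ eigvecs[:, order[:k]]         # map each v_i back to X v_i
    W /= np.linalg.norm(W, axis=0)         # normalize to unit length
    return W, mu
```

For the 400-image example, this means eigendecomposing a $400 \times 400$ matrix rather than a $10000 \times 10000$ one.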

For details, see: http://docs.opencv.org/3.2.0/da/d60/tutorial_face_main.html


Reposted from blog.csdn.net/ChangeNew/article/details/78181160