Recently, while working on face recognition, I revisited PCA. I found myself wondering where the covariance matrix, eigenvalues, and eigenvectors come from, and why PCA relies on them (I later discovered that these concepts appear in many other algorithms as well, such as MQDF for character recognition).
I read a lot of material explaining PCA, and found this tutorial the most accessible: http://blog.codinglabs.org/articles/pca-tutorial.html. It explains in detail why covariance, eigenvalues, and eigenvectors are introduced. After reading it, I believe you will have a deeper understanding of the algorithm's principles and their physical meaning.
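To make the connection concrete, here is a minimal sketch (my own illustration, not from the tutorial above) of PCA computed directly from the covariance matrix and its eigendecomposition: center the data, form the covariance matrix, take the eigenvectors with the largest eigenvalues as the principal directions, and project onto them.

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples, n_features) onto its top-k principal components."""
    X_centered = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(X_centered, rowvar=False)          # covariance matrix of features
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigh: for symmetric matrices
    order = np.argsort(eigvals)[::-1]               # sort eigenvalues descending
    components = eigvecs[:, order[:k]]              # top-k eigenvectors (directions)
    return X_centered @ components                  # coordinates in the new basis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = pca(X, 2)
print(Y.shape)  # (100, 2)
```

The key point the tutorial motivates is why the eigenvectors of the covariance matrix are the right basis: they are the directions of maximum variance, and the corresponding eigenvalues measure how much variance each direction captures.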
Reproduced from: https://www.cnblogs.com/ImageVision/p/3592008.html