Statistical Learning Methods 9: The EM Algorithm

  • Unsupervised learning, generative model

  • Application scenario of the EM algorithm: parameter estimation problems that involve hidden (latent) variables

  • EM analysis (taking the Gaussian distribution as an example):
    - Suppose there is a batch of data x.
    - The data x may come from several different components, each following the probability density function of a Gaussian distribution (a Gaussian mixture model). The problem then involves three quantities: the probability w that a point comes from a given component, and the Gaussian parameters μ and σ of each component. What we ultimately seek are the parameters μ and σ of the best model, so the component probability w plays the role of a hidden variable.
    - By the maximum likelihood estimation method, the problem can be described as maximizing the likelihood of the observed data over the parameters.
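The maximization above can be sketched as follows (a standard Gaussian-mixture formulation, not taken verbatim from the original post; K components with weights w_k are assumed):

```latex
\hat{\theta} = \arg\max_{\theta}
  \sum_{i=1}^{N} \log \sum_{k=1}^{K} w_k \,
  \mathcal{N}\!\left(x_i \mid \mu_k, \sigma_k^2\right),
\qquad \theta = \{w_k, \mu_k, \sigma_k\}_{k=1}^{K}
```

Because the log sits outside the sum over the hidden component index k, this objective has no closed-form maximizer, which is what motivates the iterative EM procedure.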

  • In general: the E step of the EM algorithm is the expectation step, which estimates the distribution of the latent variable given the data and the current parameters.
    The M step is the maximization step, which updates the parameters by maximizing the expected log-likelihood computed in the E step; the two steps are iterated until convergence.
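The E/M iteration above can be sketched for a one-dimensional, two-component Gaussian mixture. This is a minimal illustration under assumed names (w, mu, sigma for the weight, means, and standard deviations); it is not the original post's code.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Gaussian density N(x | mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_gmm(data, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    # Crude initialization: equal weights, means at the data extremes.
    w = 0.5
    mu1, mu2 = min(data), max(data)
    s1 = s2 = (max(data) - min(data)) / 4 or 1.0
    for _ in range(n_iter):
        # E step: posterior responsibility of component 1 for each point,
        # computed from the data and the current parameters.
        r = []
        for x in data:
            p1 = w * normal_pdf(x, mu1, s1)
            p2 = (1 - w) * normal_pdf(x, mu2, s2)
            r.append(p1 / (p1 + p2))
        # M step: re-estimate the parameters by maximizing the expected
        # complete-data log-likelihood (closed form for Gaussians).
        n1 = sum(r)
        n2 = len(data) - n1
        w = n1 / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1) or 1e-6
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2) or 1e-6
    return w, mu1, s1, mu2, s2

# Synthetic data from two well-separated Gaussians.
random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(5, 1) for _ in range(200)]
w, mu1, s1, mu2, s2 = em_gmm(data)
print(f"w={w:.2f}, mu1={mu1:.2f}, mu2={mu2:.2f}")
```

With the components this well separated, the recovered means should land near the true values 0 and 5 and the weight near 0.5.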


Origin blog.csdn.net/weixin_48760912/article/details/114701189