Comparing EM with Maximum Likelihood Estimation in one sentence

Both EM and maximum likelihood estimation solve for the model parameters θ that maximize the likelihood of the observed samples, and EM relies on maximum likelihood estimation internally.


EM applies when the samples contain hidden, unobserved attributes. For example, if a watermelon's stalk has fallen off, the shape of the stalk can no longer be observed and recorded.


The EM method solves for θ by iterating two steps (a code sketch follows the list):

     1) Given the current estimate of θ, compute the probability distribution of the hidden variable Z (the E-step)

     2) Using that distribution of the hidden variable together with the observed samples, solve for the maximum likelihood estimate of θ (the M-step)

Repeat the two steps until convergence.
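
To make the two steps concrete, here is a minimal sketch of EM for a two-component one-dimensional Gaussian mixture, where the component assignment of each sample is the hidden variable Z. The mixture model, the function name em_gmm_1d, and all parameter choices are illustrative assumptions, not from the original text:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100, seed=0):
    """Illustrative EM for a 2-component 1-D Gaussian mixture."""
    rng = np.random.default_rng(seed)
    # Initial guess for theta = (weights, means, variances)
    w = np.array([0.5, 0.5])
    mu = rng.choice(x, size=2, replace=False)
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: posterior probability of the hidden component Z
        # for each sample, given the current theta
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: maximum likelihood estimate of theta, weighting each
        # sample by its posterior component probabilities
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```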


Maximum likelihood estimation assumes the form of the model is known and solves for the parameters that make the probability of the observed samples largest. It is a bit like solving a known equation for its unknowns. Therefore the θ obtained by maximum likelihood estimation is stable: the same data always yields the same estimate.
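
As a concrete example (the coin-flip setting is an assumption added here for illustration): if a coin with unknown heads probability θ is flipped n times and comes up heads k times, the likelihood is L(θ) = θ^k (1 − θ)^(n − k). Setting the derivative of log L(θ) to zero gives the unique closed-form solution θ̂ = k/n, so the same sample always produces the same estimate.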


But for EM, because of the unknown hidden variables, different initial values of θ often lead to different convergence results. (That is, the result obtained may be a local optimum rather than the global optimum.)
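
A common mitigation, sketched below using the em_gmm_1d function assumed above, is to run EM from several random initializations and keep the fit with the highest log-likelihood. The restart count and the synthetic data are illustrative choices:

```python
import numpy as np

def gmm_loglik(x, w, mu, var):
    """Log-likelihood of 1-D data under a 2-component Gaussian mixture."""
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.log((w * dens).sum(axis=1)).sum()

# Synthetic data: two well-separated Gaussian clusters
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])

# Restart EM from 10 different initializations and keep the best fit
fits = [em_gmm_1d(x, seed=s) for s in range(10)]
w, mu, var = max(fits, key=lambda fit: gmm_loglik(x, *fit))
print(mu)  # the estimated means should land near 0 and 5
```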
