The difference between probability density function and maximum likelihood estimation

Probability density function (PDF)

Take the probability density function (PDF) of Gaussian distribution as an example,
\(f(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{1}{2} \left( \frac{x-\mu}{\sigma} \right)^2}\)

Once the mean \(\mu\) and standard deviation \(\sigma\) are fixed, \(f(x)\) is the PDF of \(x\). More generally, \(f\) can be regarded as a function of both \(x\) and the parameters \(\theta = (\mu, \sigma)\):
\(f(x; \theta) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{1}{2} \left( \frac{x-\mu}{\sigma} \right)^2}\)
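As a minimal sketch, the PDF above can be evaluated directly with the standard library (the function name `gaussian_pdf` is my own, not from the source):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Gaussian PDF f(x; theta) with theta = (mu, sigma)."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# The density peaks at x = mu, where it equals 1 / (sigma * sqrt(2*pi)).
print(gaussian_pdf(0.0, 0.0, 1.0))  # ≈ 0.3989
```

Here \(x\) varies while \(\mu\) and \(\sigma\) are held fixed, which is the PDF point of view.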

Maximum likelihood estimation

Now suppose a data set \(X = \{x_0, x_1, x_2, \ldots\}\) is known, and we seek the parameter \(\theta\) that maximizes the likelihood of this data. At this point the data \(x\) is fixed and \(f(x; \theta)\) is viewed as a function of the model parameters \(\theta\),
\(f(\theta) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{1}{2} \left( \frac{x-\mu}{\sigma} \right)^2}\)
Among all possible values of \(\theta\), the maximum likelihood estimate \(\hat{\theta}\) is the parameter value that maximizes \(f(\theta)\). For the whole data set, the likelihood is the product of \(f(x_i; \theta)\) over all samples, and in practice its logarithm is maximized.
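For the Gaussian, the maximizer \(\hat{\theta} = (\hat{\mu}, \hat{\sigma})\) has a closed form: the sample mean and the \(1/n\) standard deviation. A sketch (the function name `gaussian_mle` is my own, not from the source):

```python
import math

def gaussian_mle(data):
    """Closed-form MLE for a Gaussian.

    mu_hat is the sample mean; sigma_hat uses the 1/n (not 1/(n-1))
    variance, since that is the value maximizing the likelihood.
    """
    n = len(data)
    mu_hat = sum(data) / n
    sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in data) / n)
    return mu_hat, sigma_hat

mu_hat, sigma_hat = gaussian_mle([1.0, 2.0, 3.0, 4.0])
print(mu_hat, sigma_hat)  # 2.5, ≈ 1.1180
```

Note the reversal of roles compared with the PDF: the data is fixed and the parameters vary.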

A figure in Aurélien Géron's book summarizes this distinction (Géron, 2019). [Figure not reproduced here.]

Origin www.cnblogs.com/yaos/p/12740004.html