Deriving the Maximum Likelihood Estimate of the Normal Mean

Below we derive the maximum likelihood estimate of the mean of a normal distribution, followed by the Bayesian estimate.

Suppose the data x1, x2, ..., xn come from a normal distribution N(μ, σ²), where σ² is known.

(1) Using the sample x1, ..., xn, write down the maximum likelihood estimate of μ.
(2) Assuming the prior distribution of μ is the normal distribution N(0, τ²), write down the Bayes estimate of μ based on the sample x1, ..., xn.
First, the maximum likelihood estimate (Maximum Likelihood Estimation, MLE).
The probability density function of the normal distribution is

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
The objective function (the likelihood) is

$$L(\mu) = \prod_{i=1}^{n} f(x_i) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x_i-\mu)^2}{2\sigma^2}\right)$$
Two points about this formula deserve explanation.

(1) Why a product? Because the samples are independent, the joint probability is the product of the probabilities of the individual samples.

(2) Why multiply the probability density values of the n samples rather than their probabilities? Strictly speaking, it should be a product of probabilities (not densities); using densities here is a simplification. For a continuous random variable, the probability of any single point is zero. When we speak of the probability that the random variable equals some value, we actually mean the probability of a small ε-neighborhood around that value. When ε is small, we can approximate:
$$P(x_i - \varepsilon < X < x_i + \varepsilon) \approx 2\varepsilon \, f(x_i)$$
Since 2 and ε are constants, they can be ignored when comparing the magnitude of the joint probability, so the probability density can be used in place of the probability.
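This approximation is easy to check numerically. The sketch below (my own illustration, not from the original post) compares the exact probability of an ε-neighborhood, computed from the normal CDF, with the approximation 2ε·f(a); the point a = 0.7 and ε = 1e-4 are arbitrary choices.

```python
import math

def normal_pdf(x, mu, sigma):
    # density of N(mu, sigma^2) at x
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def normal_cdf(x, mu, sigma):
    # CDF of N(mu, sigma^2), expressed via the error function
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mu, sigma = 0.0, 1.0
a, eps = 0.7, 1e-4          # arbitrary point and neighborhood half-width

exact = normal_cdf(a + eps, mu, sigma) - normal_cdf(a - eps, mu, sigma)
approx = 2 * eps * normal_pdf(a, mu, sigma)
print(exact, approx)         # the two values agree to many decimal places
```

Shrinking ε makes the agreement tighter, which is why the constant factor 2ε can be dropped when comparing likelihoods.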
Taking the logarithm of the likelihood:

$$\ln L(\mu) = -n \ln\left(\sqrt{2\pi}\,\sigma\right) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2$$

Setting the derivative with respect to μ to zero:

$$\frac{\partial \ln L(\mu)}{\partial \mu} = \frac{1}{\sigma^2} \sum_{i=1}^{n} (x_i - \mu) = 0 \quad\Rightarrow\quad \hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i$$
We can see that the maximum likelihood estimate of μ is the mean of the n samples.

Origin blog.csdn.net/greatwall_sdut/article/details/104658383