Below is a derivation of the maximum likelihood estimate and the Bayesian estimate of the mean of a normal distribution.

Suppose the data x1, x2, ..., xn come from a normal distribution N(μ, σ²), where σ² is known.

(1) Given the sample x1, ..., xn, write down the maximum likelihood estimate of μ.

(2) Assume the prior distribution of μ is the normal distribution N(0, τ²). Given the sample x1, ..., xn, write down the Bayes estimate of μ.
First, we derive the maximum likelihood estimate (Maximum Likelihood Estimation, MLE).
The normal probability density function is

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).$$

The objective function (the likelihood of the sample) is

$$L(\mu) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x_i-\mu)^2}{2\sigma^2}\right).$$
Two points about this formula need explanation.

(1) Why a product? Because the samples are independent, the joint probability is the product of the probabilities of the individual samples.
(2) Why multiply the probability density values of the n samples rather than their probabilities? Strictly speaking, the likelihood should be a product of probabilities, not densities; a simplification is being made here. For a continuous random variable, the probability of taking any single value is zero. When we speak of "the probability that the random variable equals some value," we really mean the probability that it falls within a small neighborhood of width ε around that value. When ε is small, this probability can be approximated as P(x ≤ X ≤ x + ε) ≈ f(x)·ε.

Since ε is a constant that does not depend on μ, it can be ignored when comparing the sizes of joint probabilities, so the probability density can be used in place of the probability.
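As an illustration (not part of the original post), the approximation P(x ≤ X ≤ x + ε) ≈ f(x)·ε can be checked numerically. The sketch below uses only the Python standard library, obtaining the normal CDF from the error function:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2) at x, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

x, eps = 0.5, 1e-4
exact = normal_cdf(x + eps) - normal_cdf(x)   # P(x <= X <= x + eps)
approx = normal_pdf(x) * eps                  # f(x) * eps
print(exact, approx)  # the two values agree to several significant digits
```

For a fixed ε the two quantities differ only by a term of order ε², which is why maximizing the product of densities is equivalent to maximizing the product of (approximate) probabilities.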
Taking the logarithm and differentiating with respect to μ,

$$\ln L(\mu) = -n\ln\!\left(\sqrt{2\pi}\,\sigma\right) - \sum_{i=1}^{n}\frac{(x_i-\mu)^2}{2\sigma^2}, \qquad \frac{d\ln L}{d\mu} = \sum_{i=1}^{n}\frac{x_i-\mu}{\sigma^2} = 0,$$

which gives

$$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i.$$

We can see that the maximum likelihood estimate of μ is simply the mean of the n samples.
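As a quick numerical check (illustrative, not from the original post), the sample mean should maximize the log-likelihood. A minimal Python sketch, with σ assumed known as in the problem statement:

```python
import random

def log_likelihood(mu, xs, sigma):
    """Log-likelihood of N(mu, sigma^2) for sample xs, up to an additive constant."""
    return -sum((x - mu) ** 2 for x in xs) / (2 * sigma ** 2)

random.seed(0)
sigma = 2.0
xs = [random.gauss(3.0, sigma) for _ in range(1000)]  # simulated sample, true mean 3.0

mle = sum(xs) / len(xs)  # closed-form MLE: the sample mean

# A grid search over candidate means confirms the maximizer.
grid = [i / 1000 for i in range(2000, 4001)]  # candidates 2.000 .. 4.000
best = max(grid, key=lambda m: log_likelihood(m, xs, sigma))
print(mle, best)  # the best grid point lies next to the sample mean
```

The grid maximizer agrees with the sample mean up to the grid resolution, matching the closed-form derivation above.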
This completes the derivation of the maximum likelihood estimate for the normal mean.
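Question (2) asked for the Bayes estimate, whose derivation does not appear in the text above. The following is a sketch of the standard result under the stated assumptions (prior μ ~ N(0, τ²), known σ²). The posterior is proportional to the likelihood times the prior:

$$p(\mu \mid x_1,\ldots,x_n) \;\propto\; \exp\!\left(-\sum_{i=1}^{n}\frac{(x_i-\mu)^2}{2\sigma^2}\right)\exp\!\left(-\frac{\mu^2}{2\tau^2}\right).$$

Completing the square in μ shows the posterior is itself normal, with mean

$$\hat{\mu}_B = \frac{n\tau^2}{n\tau^2 + \sigma^2}\,\bar{x}, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,$$

and this posterior mean is the Bayes estimate under squared-error loss. Note that it shrinks the sample mean toward the prior mean 0; as n grows, the weight on the sample mean approaches 1 and the Bayes estimate approaches the MLE.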
Origin blog.csdn.net/greatwall_sdut/article/details/104658383