Maximum a posteriori and maximum likelihood

P(x|z) = P(z|x) P(x) / P(z) ∝ P(z|x) P(x)
In Bayes' rule above, the left-hand side P(x|z) is commonly called the posterior probability, P(z|x) on the right is called the likelihood, and P(x) is called the prior. Computing the full posterior distribution directly is difficult, but finding a single optimal state estimate, i.e., the state that maximizes the posterior probability (MAP), is feasible:

x_MAP = arg max P(x|z) = arg max P(z|x) P(x)
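As a minimal numerical sketch of this formula (not from the original post; the Gaussian likelihood, Gaussian prior, and all numbers are assumed for illustration), the MAP estimate can be found by evaluating P(z|x) P(x) over a grid of candidate states and taking the arg max:

```python
import numpy as np

# Assumed example: estimate a scalar state x from one noisy observation z.
z, sigma_z = 2.0, 1.0            # observation and measurement noise std (assumed)
x0, sigma_0 = 0.0, 1.0           # Gaussian prior N(x0, sigma_0^2) on the state (assumed)

xs = np.linspace(-5.0, 5.0, 2001)                          # grid of candidate states
likelihood = np.exp(-0.5 * ((z - xs) / sigma_z) ** 2)      # P(z | x), up to a constant
prior = np.exp(-0.5 * ((xs - x0) / sigma_0) ** 2)          # P(x), up to a constant
posterior = likelihood * prior                             # P(x | z) up to normalization

x_map = xs[np.argmax(posterior)]                           # arg max P(z|x) P(x)
print(x_map)                                               # lies between z and the prior mean
```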

Maximizing the posterior probability is therefore equivalent to maximizing the product of the likelihood and the prior. If there is no prior, i.e., no prior information about the state, then we can instead solve for the maximum likelihood estimate (MLE) of x:

x_MLE = arg max P(z|x)

Intuitively, the likelihood answers: "Given the current state, how likely is it to produce the observed data?" Since the observed data are known, maximizing the likelihood can be read as: "Under what state is the observed data most likely to have been produced?" This is the intuitive meaning of maximum likelihood estimation.
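A small sketch of this intuition (an assumed setup, not from the original post): given several measurements z_i of one unknown state x corrupted by Gaussian noise, the state that maximizes the likelihood of the observed data coincides with the sample mean of the measurements:

```python
import numpy as np

# Assumed example: several measurements z_i = x + Gaussian noise of one unknown state x.
rng = np.random.default_rng(0)
x_true, sigma_z = 3.0, 0.5
z = x_true + sigma_z * rng.standard_normal(5)          # observed data

xs = np.linspace(0.0, 6.0, 6001)                       # candidate states
log_lik = -0.5 * ((z[:, None] - xs[None, :]) / sigma_z) ** 2  # per-measurement log-likelihood
x_mle = xs[np.argmax(log_lik.sum(axis=0))]             # state that maximizes the joint likelihood

print(x_mle, z.mean())                                 # both give (approximately) the same value
```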

The relationship between maximum likelihood and least squares

The least squares method estimates parameters by minimizing the residual sum of squares. Since the observations are known and the parameters are to be solved for, this is a likelihood problem, and minimizing the residual sum of squares is equivalent to maximum likelihood estimation when the measurement residuals are normally (Gaussian) distributed.

Minimum residual sum of squares and normally distributed residuals

Any observation is affected by noise, i.e., there is a residual (error) between the observed value and the true value. Assume the residuals are normally distributed (the error is a random variable following a normal distribution):

Y = f(X) + V

where Y is the observation, X is the parameter, and V is the residual.
The residual follows a normal distribution with mean 0, i.e.:

V = Y - f(X) ~ N(0, σ²)
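The equivalence can be checked directly: under this model the negative log-likelihood is Σ(Y - f(X))² / (2σ²) plus a constant, so its minimizer is exactly the minimizer of the residual sum of squares. A minimal sketch of this check (the linear model f, the noise level, and the data below are all assumed for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Assumed example: fit a line y = a*x + b to noisy data.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(x.size)   # Y = f(X) + V, V ~ N(0, 0.1^2)

# (1) Least squares: minimize the residual sum of squares.
a_ls, b_ls = np.polyfit(x, y, 1)

# (2) Maximum likelihood under Gaussian residuals: the negative log-likelihood is
#     sum((y - f(x))^2) / (2*sigma^2) + const, so it has the same minimizer.
def neg_log_likelihood(params, sigma=0.1):
    a, b = params
    r = y - (a * x + b)                      # residuals V = Y - f(X)
    return np.sum(r**2) / (2 * sigma**2)     # constants dropped; they do not move the arg min

a_ml, b_ml = minimize(neg_log_likelihood, x0=[0.0, 0.0]).x
print((a_ls, b_ls), (a_ml, b_ml))            # the two estimates agree (up to solver tolerance)
```

Note that an unknown noise variance σ² only scales the objective and does not change the arg min, which is why ordinary least squares does not need to know it.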

Source: blog.csdn.net/fb_help/article/details/94745244