Maximum likelihood estimation (MLE) and maximum a posteriori (MAP) estimation

Maximum likelihood estimation (MLE) and maximum a posteriori (MAP) estimation are the frequentist and Bayesian parameter estimation methods, respectively. (Statisticians are divided into two schools of thought: frequentists hold that the parameter is fixed and not random, while Bayesians treat the parameter as a random variable.) Below we use linear regression as a case study to briefly analyze MLE and MAP, the relationship between MLE and least-squares regression, and the relationship between MAP and regularized least-squares regression. (This is not a rigorous professional treatment; we only hope to help beginners understand these two estimators in the most direct way.)

Linear regression problem: given observation data (in machine learning, often called a training set) $S = \{x_i, y_i\}_{i=1}^N$, $x_i \in R^m$, $y_i \in R$, we want to use $S$ to obtain a function from $R^m$ to $R$ that in some way expresses the relationship between $x$ and $y$, so that given any value of $x$ we can predict the corresponding value of $y$. For simplicity, we generally assume that this function has the form $$y = w^T x + \epsilon, \quad \epsilon \sim N(0, \sigma^2),$$ where $w \in R^m$ is the parameter we need to determine from $S$. We do not consider a bias term here; a bias can be included in this model by appending a 1 to $x$. We now determine the value of $w$ by MLE and by MAP.
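The linear-Gaussian model above can be simulated in a few lines. This is a minimal sketch under the stated assumptions; all names (`w_true`, `sigma`, the sizes `N` and `m`) are illustrative choices, not from the original:

```python
import numpy as np

rng = np.random.default_rng(0)

N, m = 100, 3                  # N observations, m-dimensional inputs
w_true = rng.normal(size=m)    # the unknown parameter w (here chosen at random)
sigma = 0.5                    # noise standard deviation

# Rows of X are the x_i^T; each y_i = w^T x_i + eps_i with eps_i ~ N(0, sigma^2).
X = rng.normal(size=(N, m))
y = X @ w_true + rng.normal(scale=sigma, size=N)
```

A bias term could be added, as the text notes, by appending a column of ones to `X` and a corresponding extra component to `w`.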

Maximum likelihood estimation (MLE): MLE regards the observation $y$ as a sample generated from a distribution $p(y|x, w)$; that is, each value of $w$ determines a distribution $p(y|x, w)$, from which a sample $y$ is drawn. It follows that $y$ is the effect and $w$ is the cause. The result has already occurred, and we need to determine its cause. So we look for the cause that makes this result most likely to occur, i.e., we maximize the probability of the observed $y$.
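Under the Gaussian noise model above, maximizing $\sum_i \log p(y_i|x_i, w)$ over $w$ is equivalent to minimizing $\sum_i (y_i - w^T x_i)^2$, which is exactly least-squares regression, the connection the introduction mentions. A minimal sketch of computing this MLE in closed form (the synthetic data and all names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 200, 3
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(N, m))
y = X @ w_true + rng.normal(scale=0.1, size=N)

# Maximizing the Gaussian log-likelihood in w means minimizing the sum of
# squared residuals, whose minimizer solves the normal equations
# X^T X w = X^T y. np.linalg.lstsq computes this least-squares solution.
w_mle, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With enough data and small noise, `w_mle` recovers a value close to `w_true`; note that $\sigma$ drops out of the maximization over $w$, which is why the MLE does not depend on the noise variance.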


Origin www.cnblogs.com/XiangGu/p/12368341.html