Prior and posterior problems in deep learning

A priori and a posteriori

The prior, posterior, and likelihood that appear in papers are the core elements of a Bayesian statistical model.
The prior and the posterior are both distributions over the parameters.
1. The prior is the distribution we assume the parameters follow before seeing any data.
2. The posterior is the conditional distribution of the parameters obtained after observing the training data.
3. The likelihood is different in kind. For one thing, it is a function of the data given the parameters. For another, it is not normalized over the parameters, so it is not a valid probability distribution over them.

Any unknown quantity can be regarded as a random variable that follows some probability distribution; that is, it can be viewed as a value drawn at random from that distribution. A value drawn this way is treated as the prior: it is not a simple fixed number, but carries hidden information, namely that it obeys a certain probability distribution.

Intuitively, the posterior combines the prior and the likelihood. The relationship between them is: posterior ∝ likelihood × prior (equality holds after dividing by the evidence, the probability of the data, so that the result is normalized).
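Written out in full, this relationship is Bayes' theorem. Here $D$ denotes the observed data and $\theta$ the parameters:

```latex
% Bayes' theorem: posterior = likelihood * prior / evidence
p(\theta \mid D)
  = \frac{p(D \mid \theta)\, p(\theta)}{p(D)}
  \;\propto\; \underbrace{p(D \mid \theta)}_{\text{likelihood}}
              \times \underbrace{p(\theta)}_{\text{prior}}
```

The evidence $p(D)$ does not depend on $\theta$, which is why the proportional form is often all that is needed when comparing parameter values.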

Therefore, the posterior also follows a distribution, which explains the statement at the beginning of this article that both the prior and the posterior are distributions over the parameters.
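The relationship above can be sketched numerically. The following is a minimal illustration, not from the original post, using a hypothetical coin-flip setup: we estimate theta, the probability of heads, on a discrete grid, and all numbers (7 heads in 10 flips, the uniform prior) are assumptions chosen for the example.

```python
# Candidate values of theta and a uniform prior over them (an assumption
# made for illustration; any prior over theta would work).
grid = [i / 100 for i in range(1, 100)]            # theta = 0.01 .. 0.99
prior = {theta: 1 / len(grid) for theta in grid}

# Hypothetical observed data: 7 heads in 10 flips.
heads, flips = 7, 10

# Likelihood of the data at each theta (Bernoulli product). Note that it
# is a function of the data and is NOT normalized over theta.
likelihood = {theta: theta**heads * (1 - theta)**(flips - heads)
              for theta in grid}

# Posterior: multiply likelihood by prior, then divide by the evidence
# (the total) so the result sums to 1 and is a valid distribution.
unnorm = {theta: likelihood[theta] * prior[theta] for theta in grid}
evidence = sum(unnorm.values())
posterior = {theta: p / evidence for theta, p in unnorm.items()}

# With a uniform prior, the posterior mode lands at theta = 0.7,
# matching the empirical rate of heads.
mode = max(posterior, key=posterior.get)
```

Because the prior here is uniform, the posterior is just the normalized likelihood; a non-uniform prior would pull the mode toward the values it favors.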

Reprinted with reference to this blog post: https://blog.csdn.net/workerwu/article/details/8023045
Thanks to workerwu for the help; that post gives a more concise explanation of prior and posterior issues.

Origin blog.csdn.net/Leomn_J/article/details/112132618