Pattern Recognition (1): Bayesian Decision Theory

I've been teaching myself from the "watermelon book" a senior lent me. I wanted an excuse to slack off by saying that this week I'm reviewing pattern recognition, but honestly I feel a bit guilty about it.

1 Basic concepts

Prior probability: the probability that a class (cause) occurs, known before anything is observed.
Class-conditional probability density: given that a sample comes from a particular class, the probability density of observing a particular feature value.
Posterior probability: after an observation has been made, the probability of each possible class (cause) given that observation.
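As a minimal sketch of how these three quantities fit together via Bayes' rule (the two classes, their priors, and the Gaussian class-conditional densities below are all made up for illustration):

```python
import numpy as np
from scipy.stats import norm

# Two hypothetical classes with made-up priors P(w1), P(w2)
priors = np.array([0.6, 0.4])

# Assumed class-conditional densities p(x | w_i): 1-D Gaussians
class_conditionals = [norm(loc=0.0, scale=1.0), norm(loc=2.0, scale=1.0)]

def posteriors(x):
    """Bayes' rule: P(w_i | x) = p(x | w_i) P(w_i) / p(x)."""
    likelihoods = np.array([pdf.pdf(x) for pdf in class_conditionals])
    joint = likelihoods * priors          # p(x | w_i) P(w_i)
    return joint / joint.sum()            # normalize by the evidence p(x)

print(posteriors(1.2))  # posterior probabilities of the two classes at x = 1.2
```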

2 Common decision rules (these use the prior probabilities and class-conditional probabilities to obtain the posterior probabilities)

2.2.1 Minimum-error-rate Bayesian decision (requires the prior probabilities and class-conditional probabilities)

This is the maximum a posteriori classifier: the sample is assigned to the class whose posterior probability is largest, i.e. the most likely class.
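A minimal sketch of that decision rule, assuming the posterior probabilities for one observation have already been computed (the numbers are made up):

```python
import numpy as np

# Hypothetical posterior probabilities P(w_i | x) for one observation
# (in practice they come from Bayes' rule as in the previous sketch)
posterior = np.array([0.35, 0.55, 0.10])

# Minimum-error-rate (maximum a posteriori) decision:
# pick the class whose posterior probability is largest
decision = int(np.argmax(posterior))
print(f"assign to class w{decision + 1}")
```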

2.2.2 Minimum-risk Bayesian decision

This rule introduces a loss (risk) matrix: each possible decision is assigned a loss that depends on the true class, and the decision with the smallest expected (conditional) risk is chosen.
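A minimal sketch, again with made-up posteriors and a made-up loss matrix:

```python
import numpy as np

# Hypothetical posterior probabilities P(w_j | x) for one observation
posterior = np.array([0.35, 0.55, 0.10])

# Made-up loss matrix: loss[i, j] = cost of decision a_i when the true class is w_j
loss = np.array([
    [0.0, 2.0, 3.0],
    [1.0, 0.0, 2.5],
    [4.0, 1.0, 0.0],
])

# Conditional risk of each decision: R(a_i | x) = sum_j loss[i, j] * P(w_j | x)
risks = loss @ posterior

# Minimum-risk decision: pick the action with the smallest expected loss
decision = int(np.argmin(risks))
print(f"conditional risks: {risks}, choose a{decision + 1}")
```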

2.2.3 Likelihood-ratio decision rule (requires the prior probabilities and class-conditional probabilities)

1. With minimum error rate as the objective, the rule amounts to comparing the likelihood ratio of the class-conditional probabilities against the ratio of the prior probabilities.
2. With minimum risk as the objective, the rule is the same likelihood-ratio comparison, except that the threshold is a variant of the prior-probability ratio that also incorporates the loss terms (see the formulas below).
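In the two-class case these two variants are usually written as follows (this is the standard textbook form; the loss notation is assumed, not taken from the notes):

```latex
\[
  \ell(x) = \frac{p(x \mid \omega_1)}{p(x \mid \omega_2)}
  \;\gtrless\;
  \frac{P(\omega_2)}{P(\omega_1)}
  \qquad \text{(minimum error rate: decide } \omega_1 \text{ if the left side is larger)}
\]
\[
  \ell(x)
  \;\gtrless\;
  \frac{(\lambda_{12} - \lambda_{22})\, P(\omega_2)}{(\lambda_{21} - \lambda_{11})\, P(\omega_1)}
  \qquad \text{(minimum risk, with losses } \lambda_{ij} = \lambda(\alpha_i \mid \omega_j)\text{)}
\]
```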

2.2.4 Neyman-Pearson decision rule

If one type of error is more serious than the other, this rule keeps the probability of the important error fixed at a prescribed level and minimizes the probability of the other error.
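A minimal sketch under assumed one-dimensional Gaussian class-conditional densities (the means, variances, and the allowed error level alpha are all made up); for such densities the likelihood-ratio test reduces to thresholding x itself:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 1-D two-class problem with Gaussian class-conditional densities
p0 = norm(loc=0.0, scale=1.0)   # class w1
p1 = norm(loc=2.0, scale=1.0)   # class w2

alpha = 0.05  # allowed probability of misclassifying a w1 sample (the "important" error)

# Threshold chosen so that the right tail of p0 has probability exactly alpha
threshold = p0.ppf(1.0 - alpha)

# The other error rate (misclassifying w2 as w1) is then whatever it has to be
beta = p1.cdf(threshold)
print(f"decide w2 when x > {threshold:.3f}; error on w1 = {alpha}, error on w2 = {beta:.3f}")
```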

3 Parameter estimation of probability density functions

Bayesian decision theory requires the prior probabilities and class-conditional probabilities in order to obtain the posterior probabilities. However, the class-conditional probability density function is difficult to obtain directly, so this section assumes a parametric form for the distribution and estimates its parameters from samples.

3.2 Maximum likelihood estimation

The parameter estimate is taken to be the value that makes the likelihood function attain its maximum.
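A minimal sketch for a one-dimensional Gaussian, where maximizing the likelihood has a closed-form solution (the true mean and variance used to generate the made-up samples are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=5.0, scale=2.0, size=1000)  # made-up training samples

# For a 1-D Gaussian the ML solution is the sample mean
# and the (biased, 1/N) sample variance.
mu_hat = samples.mean()
sigma2_hat = ((samples - mu_hat) ** 2).mean()

print(f"ML estimates: mean = {mu_hat:.3f}, variance = {sigma2_hat:.3f}")
```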

3.3 Bayesian estimation and Bayesian learning

A prior distribution over the parameter is assumed first; the samples are then used to obtain the posterior distribution of the parameter, and the estimate is chosen by the minimum-risk criterion (for example, the posterior mean under squared-error loss).
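A minimal sketch of the standard conjugate case (estimating the mean of a Gaussian with known variance under a Gaussian prior); the numbers are made up and this particular example is not taken from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0                                    # known observation variance
samples = rng.normal(loc=3.0, scale=np.sqrt(sigma2), size=50)

# Gaussian prior over the unknown mean: mu ~ N(mu0, sigma0^2)
mu0, sigma0_2 = 0.0, 10.0

# Conjugate update: the posterior over mu is again Gaussian
n = len(samples)
post_var = 1.0 / (n / sigma2 + 1.0 / sigma0_2)
post_mean = post_var * (samples.sum() / sigma2 + mu0 / sigma0_2)

# Under squared-error loss the minimum-risk (Bayes) estimate is the posterior mean
print(f"posterior over the mean: N({post_mean:.3f}, {post_var:.4f})")
```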

3.4 Nonparametric estimation

Here the probability density function is inferred directly from the samples, without assuming a parametric form: the density at any point is estimated from the discrete samples around that point.
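The notes do not name a specific method; as one standard example, here is a minimal Parzen-window (Gaussian-kernel) density estimate on made-up samples:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=500)   # made-up samples

def parzen_density(x, data, h=0.3):
    """Parzen-window estimate with a Gaussian kernel of width h."""
    kernel = np.exp(-0.5 * ((x - data) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return kernel.mean()

print(parzen_density(0.0, samples))   # estimated density at x = 0
```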

