Naive Bayes & Bayesian Networks

  • Bayesian decision theory

     In the ideal case where all relevant probabilities are known, Bayesian decision theory considers how to select the optimal class label based on these probabilities and the misclassification loss. The basic idea is as follows:

(1) Start from the known prior probabilities P(c) and the class-conditional probability densities (likelihoods) P(x|c).

(2) Use Bayes' theorem (written out below) to convert them into posterior probabilities P(c|x).

(3) Make the classification decision based on the magnitude of the posterior probabilities.
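
For reference, Bayes' theorem expresses the posterior in terms of the prior and the likelihood:

$$P(c \mid x) = \frac{P(c)\, P(x \mid c)}{P(x)}$$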

1. Risk minimization

Risk: based on the posterior probabilities, the expected loss of classifying a sample into a given class can be computed; this is the "conditional risk" on that sample.

Purpose: to minimize the overall risk, it suffices to select, on each sample, the class label that minimizes the conditional risk (formalized below).
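
Written out (here λ_ij denotes the loss incurred by classifying a sample whose true class is c_j as c_i, a common textbook convention; the symbols are illustrative rather than from the original):

$$R(c_i \mid x) = \sum_{j} \lambda_{ij}\, P(c_j \mid x), \qquad h^{*}(x) = \arg\min_{c}\, R(c \mid x)$$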

2. Minimizing decision risk = maximizing the posterior probability
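
Under the 0/1 misclassification loss (λ_ij = 0 if i = j and 1 otherwise, the usual assumption at this step), the conditional risk reduces to R(c | x) = 1 − P(c | x), so minimizing the risk is exactly maximizing the posterior:

$$h^{*}(x) = \arg\min_{c}\, \big(1 - P(c \mid x)\big) = \arg\max_{c}\, P(c \mid x)$$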

      There are two ways to obtain the posterior probabilities, and machine learning models are accordingly divided into discriminative models and generative models.

Discriminative model: for a given x, predict c by directly modeling P(c|x) (logistic regression is a typical example).

Generative model: first model the joint distribution P(c, x), then derive P(c|x) from it (naive Bayes, below, is a typical example); the normalization step is written out next.
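
For the generative route, the joint distribution factors into prior times likelihood, and the posterior follows by normalizing over all classes:

$$P(c \mid x) = \frac{P(c, x)}{P(x)} = \frac{P(c)\, P(x \mid c)}{\sum_{c'} P(c')\, P(x \mid c')}$$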

  • Naive Bayes (NB)

Assumption: attributes are conditionally independent of one another given the class label (the factorization below).
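
Under this assumption the likelihood factorizes attribute by attribute, giving the naive Bayes classifier (with d denoting the number of attributes):

$$P(x \mid c) = \prod_{i=1}^{d} P(x_i \mid c), \qquad h_{nb}(x) = \arg\max_{c}\, P(c) \prod_{i=1}^{d} P(x_i \mid c)$$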

Algorithm:

Input: training set T = {(x_i, y_i) | i = 1, ..., N}

Output: the predicted class label of instance x
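
A minimal runnable sketch of this procedure, assuming categorical attributes and Laplace (add-one) smoothing; the class name NaiveBayes, the smoothing constant, and the toy data are illustrative choices, not from the original text:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Categorical naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, X, y):
        n = len(y)
        self.classes = sorted(set(y))
        n_features = len(X[0])
        # Prior P(c), smoothed: (count(c) + 1) / (N + #classes).
        self.class_count = Counter(y)
        self.prior = {c: (self.class_count[c] + 1) / (n + len(self.classes))
                      for c in self.classes}
        # Per-class, per-attribute value counts for estimating P(x_i | c).
        self.cond = defaultdict(Counter)                   # key: (class, attribute index)
        self.values = [set() for _ in range(n_features)]   # observed values per attribute
        for xs, c in zip(X, y):
            for i, v in enumerate(xs):
                self.cond[(c, i)][v] += 1
                self.values[i].add(v)
        return self

    def predict_one(self, x):
        # argmax_c  log P(c) + sum_i log P(x_i | c), with add-one smoothing.
        best_c, best_lp = None, float("-inf")
        for c in self.classes:
            lp = math.log(self.prior[c])
            for i, v in enumerate(x):
                num = self.cond[(c, i)][v] + 1
                den = self.class_count[c] + len(self.values[i])
                lp += math.log(num / den)
            if lp > best_lp:
                best_c, best_lp = c, lp
        return best_c

# Toy usage (hypothetical data): predict a label from two categorical attributes.
X = [["sunny", "hot"], ["rainy", "cool"], ["sunny", "cool"], ["rainy", "hot"]]
y = ["no", "yes", "yes", "no"]
clf = NaiveBayes().fit(X, y)
print(clf.predict_one(["sunny", "cool"]))  # -> "yes"
```

Log probabilities are used in predict_one so that multiplying many per-attribute probabilities does not underflow to zero in floating point.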

 
