Probability theory - Bayesian decision



1. Bayesian decision theory

Bayesian decision theory: under incomplete information, the partially unknown state is estimated with subjective probabilities.

2. Bayes' formula

2.1 Deriving Bayes' formula from the conditional probability formula

If \(A\) and \(B\) are independent of each other, then \(P(A,B) = P(A)P(B)\); the conditional probability formulas are
\[p(A|B) = {\frac{p(A,B)}{p(B)}} \\ p(B|A) = {\frac{p(A,B)}{p(A)}}\]
From the conditional probability formulas we obtain
\[p(A,B) = p(B|A)p(A) \\ p(A|B) = {\frac{p(B|A)p(A)}{p(B)}} \quad \text{(abbreviated Bayes' formula)}\]
\(P(A|B)\): posterior probability, the probability that \(A\) occurs given that \(B\) has occurred; this is the quantity to be calculated

\(P(B|A)\): likelihood, the probability that \(B\) occurs assuming \(A\) holds

\(P(A)\): prior probability of \(A\), which can be understood as the probability that \(A\) occurs in general

\(P(B)\): normalizing constant, which can be understood as the probability that \(B\) occurs in general
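As a minimal sketch (the function name and the example numbers are illustrative, not from the text), the abbreviated Bayes' formula combines exactly these three quantities:

```python
def posterior(likelihood: float, prior: float, evidence: float) -> float:
    """Abbreviated Bayes' formula: p(A|B) = p(B|A) * p(A) / p(B)."""
    return likelihood * prior / evidence

# Illustrative numbers: p(B|A) = 0.7, p(A) = 0.6, p(B) = 0.5
print(round(posterior(0.7, 0.6, 0.5), 2))  # 0.84
```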

2.2 Deriving Bayes' formula from the law of total probability

The law of total probability:
\[p(B) = \sum_{i=1}^n {p(B|A=A_i)p(A_i)} \quad \text{where } \sum_{i=1}^n {p(A_i)} = 1\]
From the law of total probability we obtain
\[p(A|B) = {\frac{p(B|A)p(A)}{\sum_{i=1}^n {p(B|A=A_i)p(A_i)}}} \quad \text{(full Bayes' formula)}\]
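The full formula can be sketched directly from the two equations above, assuming the events \(A_i\) partition the sample space (their priors sum to 1); the helper names are illustrative:

```python
def total_probability(likelihoods, priors):
    """Law of total probability: p(B) = sum_i p(B|A_i) * p(A_i)."""
    return sum(l * p for l, p in zip(likelihoods, priors))

def full_bayes(i, likelihoods, priors):
    """Full Bayes' formula: p(A_i|B) = p(B|A_i) * p(A_i) / p(B)."""
    return likelihoods[i] * priors[i] / total_probability(likelihoods, priors)
```

With two hypotheses, `full_bayes(0, [0.7, 0.2], [0.6, 0.4])` reproduces the digital-communication example in section 3.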

3. Application of Bayes' formula

In digital communication, random noise can cause the received signal to differ from the transmitted signal, so determining which signal was sent typically requires computing various probabilities.

Suppose the transmitter sends signals 0 and 1 with probabilities 0.6 and 0.4 respectively;

when 0 is sent, the receiver receives 0 and 1 with probabilities 0.7 and 0.2;

when 1 is sent, the receiver receives 1 and 0 with probabilities 0.8 and 0.2.

Compute the probability that the transmitter sent 0 given that the receiver received 0.

From the data given above, the following can be derived:

\(P(A_0) = 0.6\): probability that the transmitter sends 0

\(P(A_1) = 0.4\): probability that the transmitter sends 1

\(P(B) = P(A_0)P(B|A_0) + P(A_1)P(B|A_1)\): probability that the receiver receives 0

\(P(B|A_0) = 0.7\): probability of receiving 0 given that 0 was sent

\(P(B|A_1) = 0.2\): probability of receiving 0 given that 1 was sent

\[ \begin{align} p(A_0|B) & = {\frac{p(B|A_0)p(A_0)}{p(A_0)p(B|A_0) + p(A_1)p(B|A_1)}} \\ & ={\frac{0.6*0.7}{0.6*0.7 + 0.4*0.2}} \\ & ={\frac{0.42}{0.50}} \\ & =0.84 \end{align} \]
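The derivation above can be checked numerically (variable names are illustrative):

```python
priors = [0.6, 0.4]       # p(A_0): 0 was sent, p(A_1): 1 was sent
likelihoods = [0.7, 0.2]  # p(B|A_0), p(B|A_1): probability of receiving 0

evidence = sum(l * p for l, p in zip(likelihoods, priors))  # p(B) = 0.50
posterior_0 = likelihoods[0] * priors[0] / evidence         # p(A_0|B)
print(round(posterior_0, 2))  # 0.84
```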


Origin www.cnblogs.com/nickchen121/p/11686765.html