Artificial Intelligence - Quantification of Uncertainty

The key contents of this part are:

  • Bayes rule
  • Bayesian network
  • Probabilistic reasoning based on hidden Markov models
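
As a warm-up for the first topic, Bayes' rule P(A|B) = P(B|A)P(A)/P(B) can be checked numerically. The sketch below uses made-up disease-test numbers (1% prevalence, 99% sensitivity, 5% false-positive rate) purely for illustration; they are not from the course.

```python
# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)
# Illustrative numbers (assumed, not from the course).
p_a = 0.01              # P(A): prior probability of the condition
p_b_given_a = 0.99      # P(B|A): test positive given the condition
p_b_given_not_a = 0.05  # P(B|not A): false-positive rate

# Total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior via Bayes' rule
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))  # -> 0.1667
```

Even with a 99%-sensitive test, the posterior is only about 17% because the prior is so low.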

The exam will probably include the following question types. The past final-exam questions with answers below only call out some points to pay attention to.

Bayesian Network - Independence

The answer to the second question is very detailed. For the first question I consulted the article linked below, but its answer seems to be wrong.

D-separation: Determine whether variables in Bayesian network are independent - Zhihu
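
D-separation can also be checked mechanically. The sketch below is my own minimal implementation (not taken from the linked article) of the moral-graph criterion: X ⊥ Y | Z holds iff X and Y are disconnected after restricting to the ancestors of X ∪ Y ∪ Z, "marrying" co-parents, dropping edge directions, and deleting Z.

```python
from collections import deque

def d_separated(parents, x, y, z):
    """Check X independent of Y given Z in a DAG given as {node: [parents]}.

    Uses the moral-graph criterion: restrict to the ancestors of
    x, y and z, marry co-parents, drop edge directions, delete z,
    then test whether x and y are still connected.
    """
    # 1. Ancestral subgraph: {x, y} + z and all of their ancestors.
    keep, stack = set(), [x, y, *z]
    while stack:
        n = stack.pop()
        if n not in keep:
            keep.add(n)
            stack.extend(parents.get(n, []))
    # 2. Moralize: undirected child-parent and parent-parent edges.
    adj = {n: set() for n in keep}
    for n in keep:
        ps = [p for p in parents.get(n, []) if p in keep]
        for p in ps:
            adj[n].add(p); adj[p].add(n)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    # 3. Delete the conditioning set and search for a path x..y.
    blocked = set(z)
    seen, queue = {x}, deque([x])
    while queue:
        n = queue.popleft()
        if n == y:
            return False  # still connected => NOT d-separated
        for m in adj[n]:
            if m not in seen and m not in blocked:
                seen.add(m); queue.append(m)
    return True

# Collider A -> C <- B: A and B are independent, but conditioning
# on the collider C makes them dependent ("explaining away").
g = {"A": [], "B": [], "C": ["A", "B"]}
print(d_separated(g, "A", "B", []))     # True
print(d_separated(g, "A", "B", ["C"]))  # False
```

The collider example reproduces the classic pattern these exam questions test: a common effect blocks the path until it (or a descendant) is observed.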

Bayesian Network

The answer omits S, but I think it is better to write it out explicitly, since doing so does not affect the calculation result.
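
In the same spirit of keeping every variable visible in the expression, here is a minimal inference-by-enumeration sketch in which the variable S is summed out explicitly rather than dropped from the formula. The network structure (Sprinkler / Rain -> WetGrass) and all numbers are my own placeholders, not the exam's.

```python
# Inference by enumeration on a tiny hypothetical network
# S -> W <- R (Sprinkler, Rain -> WetGrass); numbers are made up.
p_s = {True: 0.3, False: 0.7}  # P(S)
p_r = {True: 0.2, False: 0.8}  # P(R)
p_w = {  # P(W=True | S, R)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

# Marginal P(W=True): sum out S and R explicitly.
p_w_true = sum(p_s[s] * p_r[r] * p_w[(s, r)]
               for s in (True, False) for r in (True, False))
print(round(p_w_true, 4))  # -> 0.3874
```

Writing the sum over S explicitly costs one extra line but makes it obvious which variables were marginalized.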

Hidden Markov Model - Finding the Probability of Hidden Variables from Observed Variables

This question was a small-class discussion topic. To compute the probability of the hidden variable, the corrected (filtered) probability, i.e. P(R|u), should be used as that day's probability of rain or no rain; the most likely state is whichever of raining / not raining has the higher probability. In the figure below there is one misplaced term in the calculation: when computing the correction probability, the factor carried forward in the multiplication should be P(R_{i-1}|u_{i-1}).
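
The correction step described above is the standard forward (filtering) recursion: P(R_i|u_{1:i}) is proportional to P(u_i|R_i) · Σ_{R_{i-1}} P(R_i|R_{i-1}) · P(R_{i-1}|u_{1:i-1}). A sketch with the textbook umbrella-world numbers (0.7 / 0.9 / 0.2, which may differ from the exam's figures):

```python
# Forward (filtering) recursion for a rain/umbrella HMM.
# Model numbers are the textbook umbrella world, assumed here:
trans = {True: 0.7, False: 0.3}   # P(R_i = rain | R_{i-1})
emit = {True: 0.9, False: 0.2}    # P(u_i = umbrella | R_i)
belief = {True: 0.5, False: 0.5}  # P(R_0): uniform prior

for _ in range(2):  # umbrella observed on day 1 and on day 2
    # Prediction: P(R_i) = sum over R_{i-1} of P(R_i|R_{i-1}) P(R_{i-1}|u_{1:i-1})
    pred = {True: sum(trans[r] * belief[r] for r in (True, False))}
    pred[False] = 1 - pred[True]
    # Correction: multiply by P(u_i | R_i), then normalize.
    unnorm = {r: emit[r] * pred[r] for r in (True, False)}
    z = unnorm[True] + unnorm[False]
    belief = {r: unnorm[r] / z for r in (True, False)}

print(round(belief[True], 3))  # -> 0.883
```

Note the correction on day i multiplies in the previous day's *corrected* belief P(R_{i-1}|u_{i-1}), which is exactly the point the figure gets wrong.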

Hidden Markov Model - State Sequence Probability

This question asks for the probability of the entire sequence, so what is wanted for the first day is P(good & bar). Given the conditions stated in the question, the mood "good" can be determined with certainty from the observation that he comes to the bar, so the first-day probability is P(good) · P(bar), with P(good) = 1. From the second day on, each term is conditioned on the previous day's mood: the probability that the mood on day two is good/bad/normal and he comes to the bar should be P(mood_2 & bar). The answer does not write the "& bar" explicitly, but it does in fact multiply in the probability of going to the bar under each mood. The last day is handled similarly.
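
The whole-sequence probability is just the chain product P(s_1) P(o_1|s_1) · ∏ P(s_i|s_{i-1}) P(o_i|s_i), with every "& bar" factor multiplied in explicitly. The mood/bar numbers below are invented placeholders; the exam's actual tables are not reproduced here.

```python
# Joint probability of a hidden mood sequence together with the
# observation "comes to the bar" every day. Numbers are placeholders.
init = {"good": 1.0, "normal": 0.0, "bad": 0.0}  # day-1 mood is certain
trans = {  # P(mood_i | mood_{i-1})
    "good":   {"good": 0.6, "normal": 0.3, "bad": 0.1},
    "normal": {"good": 0.3, "normal": 0.4, "bad": 0.3},
    "bad":    {"good": 0.2, "normal": 0.3, "bad": 0.5},
}
emit = {"good": 0.9, "normal": 0.5, "bad": 0.1}  # P(bar | mood)

def seq_prob(moods):
    """P(mood_1 .. mood_n AND bar on every day): chain-rule product."""
    p = init[moods[0]] * emit[moods[0]]          # P(s_1) P(bar|s_1)
    for prev, cur in zip(moods, moods[1:]):
        p *= trans[prev][cur] * emit[cur]        # P(s_i|s_{i-1}) P(bar|s_i)
    return p

print(round(seq_prob(["good", "good", "normal"]), 4))  # -> 0.0729
```

Each day after the first contributes exactly two factors, a transition term and a "& bar" emission term, which is the structure the answer uses even though it never writes "& bar".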

Origin blog.csdn.net/Aaron503/article/details/130937289