Probability -- 01

  1. Additive law: The probability of the union of two events equals the sum of their individual probabilities minus the probability of their intersection: P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
  2. Bayes’ Rule: The conditional probability of A given B equals the probability of their intersection divided by the probability of B, P(A|B) = P(A ∩ B) / P(B); using the multiplication rule this becomes P(A|B) = P(B|A) · P(A) / P(B). (A small worked sketch for items 1–3 appears after this list.)
  3. Medicine is one of the fields that most often use Bayes’ Theorem, computing and comparing conditional probabilities between symptoms to extract valuable insights.
  4. A distribution shows the possible values a variable can take and how frequently they occur.
  5. A probability distribution, or simply the probabilities, measures the likelihood of an outcome depending on how often it features in the sample space.
  6. Two characteristics: mean and variance. The mean of a distribution is its average value. The variance, on the other hand, is essentially how spread out the data is; the more dispersed the data, the higher its variance will be.
  7. Population data is the formal way of referring to all the data.
  8. The sample data is just part of it.
  9. Standard deviation is simply the positive square root of the variance, as you may suspect. It is measured in the same units as the mean, so we can interpret it directly, which often makes it preferable.
  10. Variance equals the expected value of the squared difference from the mean: σ² = E[(Y − μ)²], which after some simplification becomes E[Y²] − μ². (See the mean/variance sketch after this list.)
  11. Use the letter “s” to denote the sample standard deviation.
  12. Notation: X ~ N(...) — X is the variable, the tilde (~) means “is distributed as”, and the letter (here N) indicates the type of distribution, with its characteristics given in parentheses.
  13. Bernoulli Distribution: events with only two possible outcomes, True or False.
  14. Binomial Distribution: two possible outcomes per iteration, repeated over many iterations.
  15. Poisson Distribution: tests how unusual a given event frequency is for a given interval. (A sketch of these discrete distributions appears after this list.)
  16. Continuous Distributions: when we are dealing with continuous outcomes, the probability distribution is a curve, as opposed to unconnected individual bars.
  17. Normal Distribution: the outcomes of many events in nature closely resemble this distribution.
  18. Chi-Squared Distribution: the first asymmetric continuous distribution we deal with, as it consists only of non-negative values. That means the chi-squared distribution always starts from zero on the left; depending on the average and maximum values within the set, the curve of the chi-squared graph is typically skewed to the right, with most of its bulk near zero and a long right tail.
  19. The chi-squared distribution does not often mirror real-life events. However, it is often used in hypothesis testing to help determine goodness of fit.
  20. Exponential Distribution: usually present when we are dealing with events that change rapidly early on.
  21. The last continuous distribution is the logistic distribution. We often find it useful in forecast analysis, when we try to determine a cutoff point for a successful outcome. (A sketch of these continuous distributions appears after this list.)
  22. We group distributions based on the similar features they possess, since we know specific, elegant statistics about the most common ones.
  23. Discrete Distributions have finitely many distinct outcomes.
  24. Intervals: we can simply add up the probabilities for all the values that fall within that range.
  25. One peculiarity of discrete events is that the probability of Y being less than or equal to y equals the probability of Y being less than y + 1: P(Y ≤ y) = P(Y < y + 1).
  26. The probability of an interval is simply the total likelihood of any of the values within the interval occurring. (See the die-rolling sketch after this list.)
  27. We use the letter U to denote a uniform distribution, followed by the range of the values in the data set, e.g., X ~ U(1, 6). Uniform distributions are ones where all outcomes have equal probability. One such event is rolling a single standard six-sided die: we have an equal chance of getting any value from one to six, so the graph of the probability distribution has six equally tall bars, all reaching up to one sixth. Many events in gambling provide such odds, where each individual outcome is equally likely.
  28. The main takeaway is that when an event follows the uniform distribution, each outcome is equally likely. Therefore both the mean and the variance are uninterpretable and possess no predictive power whatsoever.
  29. Sadly, the uniform is not the only discrete distribution for which we cannot construct useful prediction intervals.
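
To make items 1–3 concrete, here is a minimal Python sketch of the additive law and Bayes’ rule. The screening-test numbers (prevalence, sensitivity, false-positive rate) are invented purely for illustration and are not from the notes:

```python
# Hypothetical screening-test numbers, chosen only to illustrate the formulas.
p_disease = 0.01             # P(D): prevalence of the disease
p_pos_given_disease = 0.95   # P(+ | D): test sensitivity
p_pos_given_healthy = 0.05   # P(+ | not D): false-positive rate

# Total probability of a positive test:
# P(+) = P(+ | D) * P(D) + P(+ | not D) * P(not D)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' rule: P(D | +) = P(+ | D) * P(D) / P(+)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))   # ~0.161: a positive test is far from certain

# Additive law: P(A or B) = P(A) + P(B) - P(A and B)
p_a, p_b, p_a_and_b = 0.30, 0.40, 0.12
print(p_a + p_b - p_a_and_b)           # 0.58
```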
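
A small sketch for items 6–11, using Python’s built-in statistics module to contrast the population and sample versions of variance and standard deviation on a toy data set:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]             # toy data set, made up for the example

mu = statistics.mean(data)                  # mean of the data: 5

# Population variance: sigma^2 = E[(Y - mu)^2], averaging over all N values
sigma_sq = statistics.pvariance(data, mu)   # divides by N   -> 4
sigma = statistics.pstdev(data, mu)         # sqrt of that   -> 2.0, same units as the mean

# Sample variance and standard deviation: divide by n - 1 instead of N
s_sq = statistics.variance(data)            # ~4.571
s = statistics.stdev(data)                  # ~2.138, denoted by the letter "s"

print(mu, sigma_sq, sigma, round(s_sq, 3), round(s, 3))
```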
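
Assuming scipy is available (my assumption, not something the notes specify), the discrete distributions from items 13–15 can be sketched as follows; all parameters are arbitrary examples:

```python
from scipy import stats

# Bernoulli: one trial with two outcomes (True/False), X ~ Bern(p)
bern = stats.bernoulli(p=0.3)
print(bern.pmf(1), bern.pmf(0))        # 0.3 and 0.7

# Binomial: two outcomes per iteration, over many iterations, X ~ B(n, p)
binom = stats.binom(n=10, p=0.3)
print(binom.pmf(3))                    # P(exactly 3 successes in 10 trials) ~ 0.267
print(binom.mean(), binom.var())       # n*p = 3.0, n*p*(1-p) = 2.1

# Poisson: how unusual is a given event frequency for an interval, Y ~ Po(lambda)
pois = stats.poisson(mu=4)             # e.g. 4 arrivals per hour on average
print(pois.pmf(7))                     # P(exactly 7 arrivals in one hour) ~ 0.060
print(1 - pois.cdf(7))                 # P(more than 7 arrivals) ~ 0.051
```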
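
In the same spirit, a sketch of the continuous distributions from items 16–21, again with made-up parameters and scipy assumed:

```python
from scipy import stats

# Normal: many outcomes in nature cluster symmetrically around the mean
height = stats.norm(loc=170, scale=10)     # hypothetical mean/sd in cm
print(height.cdf(180) - height.cdf(160))   # P(160 < X < 180) ~ 0.683

# Chi-squared: non-negative only, starts at zero, long right tail;
# mostly used for goodness-of-fit tests rather than to model real-life events
chi2 = stats.chi2(df=3)
print(chi2.cdf(7.815))                     # ~0.95, a common critical value

# Exponential: change happens fast early on, then levels off
wait = stats.expon(scale=5)                # hypothetical mean wait of 5 minutes
print(wait.cdf(5))                         # ~0.632 of waits end within one mean

# Logistic: handy for picking a cutoff point for a successful outcome
score = stats.logistic(loc=0, scale=1)
print(score.cdf(0))                        # 0.5 -- the natural 50/50 cutoff
```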
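
Finally, a die-rolling sketch for items 24–29: the discrete uniform distribution, interval probabilities, and the P(Y ≤ y) = P(Y < y + 1) peculiarity (scipy assumed; randint is just one convenient way to build a discrete uniform):

```python
from scipy import stats

# A fair six-sided die: X ~ U(1, 6), every outcome has probability 1/6.
# scipy's randint(low, high) covers the integers low, ..., high - 1.
die = stats.randint(1, 7)

print(die.pmf(3))                             # 1/6 for every face

# Interval probability: add up the probabilities of the values in the range
print(sum(die.pmf(k) for k in range(2, 5)))   # P(2 <= X <= 4) = 3/6 = 0.5

# Peculiarity of discrete variables: P(X <= 4) equals P(X < 5)
print(die.cdf(4), sum(die.pmf(k) for k in range(1, 5)))   # both 4/6

# The mean and variance exist (3.5 and ~2.917), but for a uniform they carry
# no predictive power: no outcome is more likely than any other.
print(die.mean(), die.var())
```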

Reposted from blog.csdn.net/BSCHN123/article/details/103566270