Generative Learning Algorithms (1): Introduction

So far, we have focused on learning algorithms that model the conditional distribution of \(y\) given \(x\), \(p(y | x; \theta)\). For example, logistic regression models this conditional probability as \(h_{\theta}(x) = \mathrm{sigmoid}\left(\theta^{T} x\right)\). For binary classification, logistic regression and the perceptron algorithm try to find a straight line, the decision boundary, that separates the two classes as well as possible. Algorithms that directly learn the conditional distribution \(p(y | x)\) (such as logistic regression), or that learn a mapping from the input space \(\mathcal{X}\) to the labels (such as the perceptron), are called discriminative learning algorithms.
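To make the discriminative hypothesis concrete, here is a minimal sketch of the logistic regression hypothesis \(h_{\theta}(x) = \mathrm{sigmoid}(\theta^{T} x)\). The parameter and feature values below are purely illustrative, not taken from the text.

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def h_theta(theta, x):
    """Discriminative hypothesis of logistic regression:
    models p(y = 1 | x; theta) directly as sigmoid(theta^T x)."""
    return sigmoid(theta @ x)

# Toy parameter vector and input (illustrative values only).
theta = np.array([0.5, -1.2, 2.0])
x = np.array([1.0, 0.3, 0.7])   # the first component could be an intercept term
print(h_theta(theta, x))        # estimated probability that y = 1
```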

Now we will discuss a different type of learning algorithm. Consider, for example, a binary classification problem: distinguishing elephants from dogs. This kind of algorithm first looks at elephants and builds a model of what elephants look like, then looks at dogs and builds a separate model of what dogs look like. When a new animal arrives, the algorithm matches it against the elephant model and the dog model and decides which kind of animal it resembles more.

Algorithms that instead model \(p(x | y)\) and \(p(y)\) (the class priors) directly are called generative learning algorithms. For example, if \(y = 0\) indicates that a sample is a dog and \(y = 1\) indicates an elephant, then \(p(x | y = 0)\) models the distribution of dog features and \(p(x | y = 1)\) models the distribution of elephant features. With these two probabilities, we can use Bayes' rule to derive the posterior distribution of \(y\) given \(x\):
\[ \begin{equation} p(y | x) = \frac{p(x | y) p(y)}{p(x)} \end{equation} \]
where the denominator can be expressed as \(p(x) = p(x | y = 1) p(y = 1) + p(x | y = 0) p(y = 0)\), i.e., entirely in terms of the quantities \(p(x | y)\) and \(p(y)\) that we have learned. In fact, if we only want to make a prediction by maximizing this conditional probability, we do not even need to compute the denominator, because
\[ \begin{equation} \begin{aligned} \arg \max _{y} p(y | x) &=\arg \max _{y} \frac{p(x | y) p(y)}{p(x)} \\ &=\arg \max _{y} p(x | y) p(y) \end{aligned} \end{equation} \]
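The prediction rule above drops \(p(x)\) because it is the same for every value of \(y\). Below is a minimal sketch of this generative decision rule, assuming for illustration that each class-conditional density \(p(x | y)\) is a univariate Gaussian; the priors, means, and variances are hypothetical values, not anything estimated in the text.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of a univariate Gaussian N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical learned quantities (illustrative values only):
# class priors p(y) and per-class feature distributions p(x | y).
prior = {0: 0.7, 1: 0.3}  # y = 0: dog, y = 1: elephant
class_conditional = {
    0: lambda x: gaussian_pdf(x, mu=30.0, sigma=10.0),     # p(x | y = 0)
    1: lambda x: gaussian_pdf(x, mu=3000.0, sigma=800.0),  # p(x | y = 1)
}

def predict(x):
    """Pick arg max_y p(x | y) p(y); the evidence p(x) is identical
    for every y, so it can be dropped from the maximization."""
    scores = {y: class_conditional[y](x) * prior[y] for y in prior}
    return max(scores, key=scores.get)

print(predict(25.0))    # -> 0 (more like a dog under these toy numbers)
print(predict(2800.0))  # -> 1 (more like an elephant)
```

The same rule applies whatever form the class-conditional densities take; the Gaussian choice here is only to make the sketch self-contained.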
