## logistic regression

Prediction function: $h_{\theta}(x)=g(\theta^{T}x)=\frac{1}{1+e^{-\theta^{T}x}}$
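A minimal sketch of the prediction function, assuming a NumPy parameter vector `theta` and feature vector `x` (both hypothetical names, with the bias folded into `theta` as usual):

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    """Prediction h_theta(x) = g(theta^T x)."""
    return sigmoid(np.dot(theta, x))

# With theta = 0 the score theta^T x is 0, so h = g(0) = 0.5 for any x.
theta = np.array([0.0, 0.0])
x = np.array([1.0, 2.0])
print(h(theta, x))  # 0.5
```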

The value of $h_{\theta}(x)$ is the probability that $y=1$, and $1-h_{\theta}(x)$ is the probability that $y=0$.

So $y \sim B(1, h_{\theta}(x))$, a Bernoulli (two-point) distribution.

The distribution of $y$ can be written compactly as $p(y) = (h_{\theta}(x))^{y}(1-h_{\theta}(x))^{1-y}$

Likelihood function: $L(\theta)=\prod_{i=1}^{m}p(y^{(i)})$ (the idea is to choose $\theta$ so as to maximize the probability of the outcomes that have already been observed)
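In practice one works with the log of this product, since a product of many probabilities underflows numerically. A sketch, assuming a hypothetical design matrix `X` of shape (m, n) and a 0/1 label vector `y`:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood(theta, X, y):
    """log L(theta) = sum_i [ y_i * log h_i + (1 - y_i) * log(1 - h_i) ]."""
    h = sigmoid(X @ theta)
    return np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))

# With theta = 0 every prediction is 0.5, so each sample contributes log(0.5).
X = np.array([[1.0, 2.0], [1.0, -1.0]])
y = np.array([1.0, 0.0])
print(log_likelihood(np.zeros(2), X, y))  # 2 * log(0.5) ≈ -1.386
```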

The next step is to take the logarithm of the likelihood and work through the partial derivatives with respect to each $\theta_j$.
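Carrying that step out (a standard derivation, using the sigmoid identity $g'(z)=g(z)(1-g(z))$), the log-likelihood and its gradient are:

```latex
\ell(\theta) = \log L(\theta)
  = \sum_{i=1}^{m}\Big[\, y^{(i)} \log h_{\theta}(x^{(i)})
  + (1-y^{(i)}) \log\big(1-h_{\theta}(x^{(i)})\big) \Big]

\frac{\partial \ell(\theta)}{\partial \theta_j}
  = \sum_{i=1}^{m} \big( y^{(i)} - h_{\theta}(x^{(i)}) \big)\, x_j^{(i)}
```

Setting $\theta_j \leftarrow \theta_j + \alpha \sum_i (y^{(i)} - h_{\theta}(x^{(i)}))\,x_j^{(i)}$ then gives gradient ascent on $\ell(\theta)$.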
