Logistic Regression (Log-Odds Regression)

What is logistic regression?

At its core, logistic regression models the logarithm of the odds, i.e. the probability that an event occurs divided by the probability that it does not. This transformation makes the relationship between the independent variables and the dependent variable linear. Logistic regression is most often used for binary classification, though it can also be extended to multi-class problems. Below we walk through its mathematical derivation step by step.

The main idea of logistic regression

First, for a sample $x$ we define the class posterior probability estimates $p(y=1|x)$ and $p(y=0|x)$.
The log-odds (logit) below then reflects the relative likelihood that $x$ is a positive example:
$$\ln\frac{p(y=1|x)}{p(y=0|x)}=w^Tx+b \qquad (1)$$
From this we immediately obtain:
$$p(y=1|x)=\frac{e^{w^Tx+b}}{1+e^{w^Tx+b}}$$
Rewriting this in sigmoid form gives:
$$\begin{aligned}
&p(y=1|x)=\frac{1}{1+e^{-(w^Tx+b)}} \qquad &(2)\\
&p(y=0|x)=\frac{1}{1+e^{w^Tx+b}} \qquad &(3)
\end{aligned}$$
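As a quick illustration, here is a minimal NumPy sketch of equations (2) and (3); the function and variable names are our own, not part of the original derivation:

```python
import numpy as np

def sigmoid(z):
    """Sigmoid function: 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def p_y1(x, w, b):
    """Equation (2): p(y=1|x)."""
    return sigmoid(w @ x + b)

def p_y0(x, w, b):
    """Equation (3): p(y=0|x) = 1 - p(y=1|x)."""
    return 1.0 - p_y1(x, w, b)
```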
With these two probabilities, we can estimate the parameters $w$ and $b$ by maximum likelihood. To simplify the computation, we work with the log-likelihood. Given a data set $\{(x_i,y_i)\}_{i=1}^m$:
$$l(w,b)=\sum_{i=1}^{m}\ln p(y_i|x_i;w,b) \qquad (4)$$
Maximizing this likelihood means finding the parameters $w$ and $b$ under which each sample is as likely as possible to belong to its true label.


How do we find the parameters $w$ and $b$?

For convenience of discussion, let $\beta=(w;b)$ and $\hat x=(x;1)$. Suppose each sample has $n$ features; in the expressions below, $x_i$ denotes the $i$-th feature of sample $x$. We then have:
$$\begin{aligned}
&w^Tx+b=\beta^T\hat x=w_1x_1+w_2x_2+\dots+w_nx_n+b\\
&\hat x=\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n\\ 1 \end{pmatrix}
\qquad
\beta=\begin{pmatrix} w_1\\ w_2\\ \vdots\\ w_n\\ b \end{pmatrix}
\end{aligned}$$
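In code, this augmentation is just appending a constant 1 to every sample and stacking $b$ onto $w$. A small sketch (the arrays and values are illustrative only):

```python
import numpy as np

# X: (m, n) matrix of samples, w: (n,) weight vector, b: scalar bias
X = np.array([[0.5, 1.2], [2.0, -0.3], [1.1, 0.7]])
w = np.array([0.4, -0.8])
b = 0.1

X_hat = np.hstack([X, np.ones((X.shape[0], 1))])  # each row is x_hat = (x; 1)
beta = np.append(w, b)                             # beta = (w; b)

# beta^T x_hat equals w^T x + b for every sample
assert np.allclose(X_hat @ beta, X @ w + b)
```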
Further, let:
$$\begin{aligned}
&p_1(\hat x;\beta)=p(y=1|\hat x,\beta)=\frac{e^{\beta^T\hat x}}{1+e^{\beta^T\hat x}}\\
&p_0(\hat x;\beta)=p(y=0|\hat x,\beta)=\frac{1}{1+e^{\beta^T\hat x}}
\end{aligned}$$

Here $\hat x_i$ denotes the $i$-th sample and $y_i$ its label, so that
$$p(y_i|x_i,w,b)=y_i\,p_1(\hat x_i;\beta)+(1-y_i)\,p_0(\hat x_i;\beta)\qquad(5)$$
Analyzing equation (5):
When $y_i=1$: $p(y_i=1|x_i,w,b)=p_1(\hat x_i;\beta)$; when $y_i=0$: $p(y_i=0|x_i,w,b)=p_0(\hat x_i;\beta)$.
Substituting equations (2), (3), and (5) into (4) gives:
$$\begin{aligned}
l(w,b)&=\sum_{i=1}^{m}\ln\left(\frac{y_i\,e^{\beta^T\hat x_i}}{1+e^{\beta^T\hat x_i}}+\frac{1-y_i}{1+e^{\beta^T\hat x_i}}\right)\\
&=\sum_{i=1}^{m}\left[\ln\left(y_i\,e^{\beta^T\hat x_i}+1-y_i\right)-\ln\left(1+e^{\beta^T\hat x_i}\right)\right]
\end{aligned}$$
When $y_i=1$: $\ln(y_i\,e^{\beta^T\hat x_i}+1-y_i)=\ln(e^{\beta^T\hat x_i})=\beta^T\hat x_i$; when $y_i=0$: $\ln(y_i\,e^{\beta^T\hat x_i}+1-y_i)=\ln(1)=0$. In both cases,
$$\ln(y_i\,e^{\beta^T\hat x_i}+1-y_i)=y_i\,\beta^T\hat x_i$$
Since maximizing the log-likelihood is equivalent to minimizing its negative, our goal is to minimize
$$-l(\beta)=\sum_{i=1}^{m}\left(-y_i\,\beta^T\hat x_i+\ln\left(1+e^{\beta^T\hat x_i}\right)\right)\qquad(6)$$
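A minimal NumPy sketch of objective (6), using the augmented `X_hat` and `beta` convention from above (`np.logaddexp(0, z)` computes $\ln(1+e^z)$ in a numerically stable way; the function name is our own):

```python
import numpy as np

def neg_log_likelihood(beta, X_hat, y):
    """Equation (6): sum_i [ -y_i * beta^T x_hat_i + ln(1 + exp(beta^T x_hat_i)) ]."""
    z = X_hat @ beta                      # beta^T x_hat_i for all samples
    return np.sum(-y * z + np.logaddexp(0.0, z))
```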
Equation (6) is a differentiable convex function of $\beta$. By convex optimization theory, its optimum can be found with gradient descent or Newton's method:
$$\beta^*=\mathop{\arg\min}_{\beta}\;-l(\beta)\qquad(7)$$
This completes the mathematical derivation of logistic regression. What remains is how to actually solve (7).

Newton's method

Taking Newton's method as an example, the update rule for iteration $t+1$ is:
$$\beta^{t+1}=\beta^t-\left(\frac{\partial^2(-l(\beta))}{\partial\beta\,\partial\beta^T}\right)^{-1}\frac{\partial(-l(\beta))}{\partial\beta}\qquad(8)$$
The first derivative of $-l(\beta)$ is:
$$\begin{aligned}
\frac{\partial(-l(\beta))}{\partial\beta}&=\frac{\partial}{\partial\beta}\sum_{i=1}^{m}\left(-y_i\,\beta^T\hat x_i+\ln\left(1+e^{\beta^T\hat x_i}\right)\right)\\
&=\sum_{i=1}^{m}\left(-y_i\,\hat x_i+\frac{\hat x_i\,e^{\beta^T\hat x_i}}{1+e^{\beta^T\hat x_i}}\right)\\
&=-\sum_{i=1}^{m}\hat x_i\left(y_i-\frac{e^{\beta^T\hat x_i}}{1+e^{\beta^T\hat x_i}}\right)\\
&=-\sum_{i=1}^{m}\hat x_i\left(y_i-p_1(\hat x_i;\beta)\right)\qquad(9)
\end{aligned}$$
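Equation (9) vectorizes naturally over all samples; a sketch (helper names are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_ll_grad(beta, X_hat, y):
    """Equation (9): -sum_i x_hat_i * (y_i - p1(x_hat_i; beta))."""
    p1 = sigmoid(X_hat @ beta)            # p(y=1 | x_hat_i) for all samples
    return -X_hat.T @ (y - p1)
```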
The second derivative (Hessian) of $-l(\beta)$ is:
$$\begin{aligned}
\frac{\partial^2(-l(\beta))}{\partial\beta\,\partial\beta^T}&=\frac{\partial}{\partial\beta^T}\left[-\sum_{i=1}^{m}\hat x_i\left(y_i-\frac{e^{\beta^T\hat x_i}}{1+e^{\beta^T\hat x_i}}\right)\right]\\
&=\sum_{i=1}^{m}\hat x_i\cdot\frac{\hat x_i^T\,e^{\beta^T\hat x_i}\left(1+e^{\beta^T\hat x_i}\right)-\hat x_i^T\,e^{\beta^T\hat x_i}\,e^{\beta^T\hat x_i}}{\left(1+e^{\beta^T\hat x_i}\right)^2}\\
&=\sum_{i=1}^{m}\hat x_i\hat x_i^T\cdot\frac{e^{\beta^T\hat x_i}}{1+e^{\beta^T\hat x_i}}\cdot\frac{1}{1+e^{\beta^T\hat x_i}}\\
&=\sum_{i=1}^{m}\hat x_i\hat x_i^T\,p_1(\hat x_i;\beta)\left(1-p_1(\hat x_i;\beta)\right)\qquad(10)
\end{aligned}$$
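Equation (10) can likewise be computed in one pass over the data; a sketch (again, names are illustrative):

```python
import numpy as np

def neg_ll_hess(beta, X_hat):
    """Equation (10): sum_i x_hat_i x_hat_i^T * p1_i * (1 - p1_i)."""
    p1 = 1.0 / (1.0 + np.exp(-(X_hat @ beta)))
    w = p1 * (1.0 - p1)                    # per-sample weight p1_i * (1 - p1_i)
    return X_hat.T @ (X_hat * w[:, None])  # (n+1, n+1) Hessian matrix
```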
Iterate with equations (8)-(10) until the change between consecutive rounds falls below some threshold, i.e. $\|\beta^{t+1}-\beta^t\|<\varepsilon$.
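Putting (8)-(10) together, here is a minimal Newton iteration sketch. It reuses `neg_ll_grad` and `neg_ll_hess` from the sketches above, and assumes the Hessian stays invertible; the stopping rule and `max_iter` are our own choices:

```python
import numpy as np

def fit_newton(X, y, eps=1e-6, max_iter=100):
    """Fit logistic regression by Newton's method; returns beta = (w; b)."""
    X_hat = np.hstack([X, np.ones((X.shape[0], 1))])   # augment each sample with 1
    beta = np.zeros(X_hat.shape[1])
    for _ in range(max_iter):
        grad = neg_ll_grad(beta, X_hat, y)              # equation (9)
        hess = neg_ll_hess(beta, X_hat)                 # equation (10)
        step = np.linalg.solve(hess, grad)              # H^{-1} * gradient, equation (8)
        beta_new = beta - step
        if np.linalg.norm(beta_new - beta) < eps:       # ||beta^{t+1} - beta^t|| < eps
            return beta_new
        beta = beta_new
    return beta
```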

Using the model

When a new sample arrives and we want to decide whether it belongs to class 1 or class 0, we simply compute $p_1(\hat x;\beta)$ with the learned $\beta$: if the result is greater than 0.5 we assign class 1, otherwise class 0.
$$\begin{aligned}
&p_1(\hat x;\beta)=p(y=1|\hat x,\beta)=\frac{e^{\beta^T\hat x}}{1+e^{\beta^T\hat x}}\\
&\text{or equivalently}\\
&p_1(\hat x;\beta)=p(y=1|\hat x,\beta)=\frac{1}{1+e^{-\beta^T\hat x}}
\end{aligned}$$
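Prediction is then a single sigmoid evaluation followed by a 0.5 threshold; a sketch under the same assumed conventions:

```python
import numpy as np

def predict(X, beta, threshold=0.5):
    """Return class 1 where p1(x_hat; beta) > threshold, else class 0."""
    X_hat = np.hstack([X, np.ones((X.shape[0], 1))])
    p1 = 1.0 / (1.0 + np.exp(-(X_hat @ beta)))
    return (p1 > threshold).astype(int)
```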

Python implementation

https://github.com/Haifei-ZHANG/machine-learning-algorithms/tree/master/Logistic Regresssion


Reposted from blog.csdn.net/zhfplay/article/details/86730512