A personal understanding of the parameter gradient update for logistic regression

After reading a few blog posts on the parameter update of logistic regression, I still felt I did not understand it thoroughly, so I am writing it up myself in the hope of reaching a deeper understanding. The input to logistic regression goes through the linear function $\boldsymbol{W}\boldsymbol{x}+\boldsymbol{b}$; to keep things simple, consider a batch size of 1. The input $\boldsymbol{x}$ is then an $n\times 1$ vector, the label $\boldsymbol{y}$ is one-hot encoded as an $m\times 1$ vector, $\boldsymbol{b}$ is likewise an $m\times 1$ vector, and the parameter $\boldsymbol{W}$ is an $m\times n$ matrix. Taking $n=4$ and $m=3$, the logistic regression can be drawn as follows:
[Figure: logistic regression drawn as a fully connected layer with $n=4$ inputs and $m=3$ outputs]
Here the label $\boldsymbol{y}$ uses one-hot encoding of length 3: if the class index is 1, its encoding is $\{1,0,0\}^T$, which in the figure above corresponds to $y_*^1=1$, $y_*^2=0$, $y_*^3=0$. The loss is the cross entropy between the one-hot label and the softmax output, summed over the three classes; below I write it without the usual leading minus sign, i.e. as the log-likelihood $L$, so minimizing the cross-entropy loss is the same as maximizing $L$:
$$L=\sum_{i=1}^{3} y_*^i\log y^i = y_*^1\log y^1 + y_*^2\log y^2 + y_*^3\log y^3$$
In the expression above:
$$\begin{aligned} y^1&=\frac{e^{z^1}}{e^{z^1}+e^{z^2}+e^{z^3}}\\ y^2&=\frac{e^{z^2}}{e^{z^1}+e^{z^2}+e^{z^3}}\\ y^3&=\frac{e^{z^3}}{e^{z^1}+e^{z^2}+e^{z^3}} \end{aligned}$$
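To make the formulas concrete, here is a minimal numpy sketch of this softmax (the helper name and the max-subtraction trick for numerical stability are my own additions, not part of the derivation):

```python
import numpy as np

def softmax(z):
    """y^i = exp(z^i) / (exp(z^1) + exp(z^2) + exp(z^3))."""
    z = z - np.max(z)   # subtracting the max does not change the result, but avoids overflow
    e = np.exp(z)
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))   # three probabilities that sum to 1
```

The logits $z^1$, $z^2$, $z^3$ fed into this softmax come from the linear layer: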

$$\begin{aligned} z^1&=\boldsymbol{w_1}^T \boldsymbol{x}+b_1\\ z^2&=\boldsymbol{w_2}^T \boldsymbol{x}+b_2\\ z^3&=\boldsymbol{w_3}^T \boldsymbol{x}+b_3 \end{aligned}$$
where $\boldsymbol{w_1}=\{w_{11},w_{12},w_{13},w_{14}\}^T$ is the first row of $\boldsymbol{W}$ and $\boldsymbol{x}=\{x_1,x_2,x_3,x_4\}^T$.
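Continuing from the softmax sketch above, the forward pass for this $n=4$, $m=3$ example can be written as follows (the sample values are made up purely for illustration):

```python
n, m = 4, 3
rng = np.random.default_rng(0)

x = rng.normal(size=n)                 # input vector x, shape (4,)
W = rng.normal(size=(m, n))            # weight matrix W with rows w_1, w_2, w_3, shape (3, 4)
b = np.zeros(m)                        # bias vector b, shape (3,)
y_star = np.array([1.0, 0.0, 0.0])     # one-hot label, class 1

z = W @ x + b                          # z^i = w_i^T x + b_i
y = softmax(z)                         # predictions y^1, y^2, y^3
L = np.sum(y_star * np.log(y))         # L = sum_i y_*^i log y^i  (log-likelihood)
```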

Taking the derivative of $L$ with respect to $\boldsymbol{w_1}$, and using the softmax derivatives $\frac{\partial y^1}{\partial z^1}=y^1(1-y^1)$, $\frac{\partial y^2}{\partial z^1}=-y^1y^2$, $\frac{\partial y^3}{\partial z^1}=-y^1y^3$ together with $\frac{\partial z^1}{\partial \boldsymbol{w_1}}=\boldsymbol{x}$:
$$\begin{aligned} \frac{\partial L}{\partial \boldsymbol{w_1}}&=\frac{\partial L}{\partial y^1}\frac{\partial y^1}{\partial z^1}\frac{\partial z^1}{\partial \boldsymbol{w_1}}+\frac{\partial L}{\partial y^2}\frac{\partial y^2}{\partial z^1}\frac{\partial z^1}{\partial \boldsymbol{w_1}}+\frac{\partial L}{\partial y^3}\frac{\partial y^3}{\partial z^1}\frac{\partial z^1}{\partial \boldsymbol{w_1}}\\ &=\frac{y_*^1}{y^1}\times y^1(1-y^1)\times \boldsymbol{x}-\frac{y_*^2}{y^2}\times y^1y^2\times \boldsymbol{x}-\frac{y_*^3}{y^3}\times y^1y^3\times \boldsymbol{x}\\ &=\left(y_*^1(1-y^1)-y_*^2y^1-y_*^3y^1\right)\boldsymbol{x}\\ &=\left(y_*^1-y^1(y_*^1+y_*^2+y_*^3)\right)\boldsymbol{x}\\ &=(y_*^1-y^1)\boldsymbol{x} \end{aligned}$$
Note that $y_*^1+y_*^2+y_*^3$ is the sum of the three entries of the one-hot label, which is exactly 1. The remaining two derivatives can be obtained in the same way:
$$\frac{\partial L}{\partial \boldsymbol{w_2}}=(y_*^2-y^2)\boldsymbol{x},\qquad \frac{\partial L}{\partial \boldsymbol{w_3}}=(y_*^3-y^3)\boldsymbol{x}$$
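These per-row formulas are easy to verify numerically with a central finite difference. The following self-contained check is my own addition and is not part of the original derivation:

```python
import numpy as np

def log_likelihood(W, b, x, y_star):
    """L = sum_i y_*^i log y^i for a single sample; also returns the softmax output y."""
    z = W @ x + b
    e = np.exp(z - z.max())
    y = e / e.sum()
    return np.sum(y_star * np.log(y)), y

rng = np.random.default_rng(1)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
y_star = np.array([1.0, 0.0, 0.0])

_, y = log_likelihood(W, b, x, y_star)
analytic = (y_star[0] - y[0]) * x              # the derived (y_*^1 - y^1) x

eps = 1e-6
numeric = np.zeros(4)
for j in range(4):                              # perturb each component of w_1
    Wp, Wm = W.copy(), W.copy()
    Wp[0, j] += eps
    Wm[0, j] -= eps
    numeric[j] = (log_likelihood(Wp, b, x, y_star)[0]
                  - log_likelihood(Wm, b, x, y_star)[0]) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-6))   # expected: True
```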
Stacking the three row gradients, the gradient of $L$ with respect to the full weight matrix $\boldsymbol{W}$ is:
$$\left[\begin{matrix} (y_*^1-y^1)x_1 & (y_*^2-y^2)x_1 & (y_*^3-y^3)x_1\\ (y_*^1-y^1)x_2 & (y_*^2-y^2)x_2 & (y_*^3-y^3)x_2\\ (y_*^1-y^1)x_3 & (y_*^2-y^2)x_3 & (y_*^3-y^3)x_3\\ (y_*^1-y^1)x_4 & (y_*^2-y^2)x_4 & (y_*^3-y^3)x_4 \end{matrix}\right]^T$$
so that in numpy the gradient of $L$ with respect to $\boldsymbol{W}$ can be computed with a single outer product:
$$\frac{\partial L}{\partial \boldsymbol{W}}=\texttt{numpy.outer}(\boldsymbol{y_*}-\boldsymbol{y},\ \boldsymbol{x})=\texttt{numpy.outer}(\boldsymbol{x},\ \boldsymbol{y_*}-\boldsymbol{y})^T$$
(the transpose makes the result match the $m\times n$ shape of $\boldsymbol{W}$).
The gradient with respect to $\boldsymbol{b}$ follows in the same way:
$$\frac{\partial L}{\partial \boldsymbol{b}}=\boldsymbol{y_*}-\boldsymbol{y}$$
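Putting everything together, one single-sample training step could look like the sketch below. Since $L$ here is the log-likelihood, the parameters are updated by gradient ascent on $L$, which is the same as gradient descent on the cross-entropy loss; the function name and learning rate are my own choices:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(W, b, x, y_star, lr=0.1):
    """One update of W (m x n) and b (m,) using the gradients derived above."""
    y = softmax(W @ x + b)            # forward pass
    dL_dW = np.outer(y_star - y, x)   # dL/dW = numpy.outer(x, y_* - y).T
    dL_db = y_star - y                # dL/db
    W += lr * dL_dW                   # ascend L  <=>  descend the cross-entropy loss
    b += lr * dL_db
    return W, b

# tiny usage example with n = 4, m = 3
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), np.zeros(3)
x = rng.normal(size=4)
y_star = np.array([1.0, 0.0, 0.0])    # one-hot label for class 1
W, b = train_step(W, b, x, y_star)
print(softmax(W @ x + b))             # the probability of class 1 should have increased
```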
