Analysis of multiple linear regression when the batch size is N

I previously introduced unary linear regression with batchsize = 1 and batchsize = N; now let's discuss multiple linear regression. The label $y$ is still a scalar, but there are M attributes, written $\{x_1, \cdots, x_i\},\ i \in M$, with the corresponding M parameters written $\{w_1, \cdots, w_i\}$. The specific representation is as follows:
$$
y = b + w_1 x_1 + \cdots + w_i x_i
$$
To express this more concisely with vectors, write the parameter $b$ as $w_0$, so that the expression becomes:
$$
y = w_0 + w_1 x_1 + \cdots + w_i x_i
$$
This can be represented with the vectors $\boldsymbol{w} = \{w_0, \cdots, w_i\}^T$ and $\boldsymbol{x} = \{1, x_1, \cdots, x_i\}^T$, so that $y = \boldsymbol{w}^T \boldsymbol{x}$.
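As a minimal sketch (with made-up numbers for the case of two attributes), the vectorized form reproduces the scalar expression:

import numpy as np

# hypothetical example: i = 2 attributes, so w = (w0, w1, w2) and x = (1, x1, x2)
w = np.array([1.0, 2.0, 3.0])   # w0 plays the role of the bias b
x = np.array([1.0, 0.5, -1.0])  # the leading 1 pairs with w0
y = w.dot(x)                    # inner product w^T x = w0 + w1*x1 + w2*x2
print(y)                        # -1.0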
For simplicity, first consider the case where the batchsize is 1. The loss function $L$ is then:
$$
\begin{aligned}
L &= \frac{1}{2}(y - y^*)^2 \\
&= \frac{1}{2}(w_0 + w_1 x_1^* + \cdots + w_i x_i^* - y^*)^2
\end{aligned}
$$
Taking the partial derivative of the loss function $L$ with respect to each component of $\boldsymbol{w}$:
$$
\frac{\partial L}{\partial w_0} = (w_0 + w_1 x_1^* + \cdots + w_i x_i^* - y^*) \cdot 1
$$
$$
\frac{\partial L}{\partial w_1} = (w_0 + w_1 x_1^* + \cdots + w_i x_i^* - y^*) \, x_1^*
$$
$$
\vdots
$$
$$
\frac{\partial L}{\partial w_i} = (w_0 + w_1 x_1^* + \cdots + w_i x_i^* - y^*) \, x_i^*
$$
So the gradient of the loss function with respect to $\boldsymbol{w}$ is:
$$
\begin{aligned}
\nabla L &= \left\{\frac{\partial L}{\partial w_0}, \cdots, \frac{\partial L}{\partial w_i}\right\}^T \\
&= (\boldsymbol{w}^T \boldsymbol{x}^* - y^*) \, \boldsymbol{x}^*
\end{aligned}
$$
Setting the step size $step$, the parameters are updated as follows:
$$
\boldsymbol{w}_{new} = \boldsymbol{w} - step \cdot \nabla L
$$
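A minimal sketch of a single batchsize-1 update, assuming made-up sample values:

import numpy as np

step = 0.01
w = np.zeros(3)                    # (w0, w1, w2), initialized to zero
x_s = np.array([1.0, 2.0, 3.0])    # x* = (1, x1*, x2*)
y_s = 4.0                          # label y*
grad = (w.dot(x_s) - y_s) * x_s    # gradient (w^T x* - y*) x*
w = w - step * grad                # one update step
print(w)                           # [0.04 0.08 0.12]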
Next consider the case where the batchsize is N. The loss function $L$ can then be expressed as:
$$
\begin{aligned}
L &= \sum_{j=1}^{N} \frac{1}{2}(y^j - y^{j*})^2 \\
&= \sum_{j=1}^{N} \frac{1}{2}(w_0 + w_1 x_1^{j*} + \cdots + w_i x_i^{j*} - y^{j*})^2 \\
&= \sum_{j=1}^{N} \frac{1}{2}(\boldsymbol{x}^{j*T} \boldsymbol{w} - y^{j*})^2 \\
&= \frac{1}{2}(A\boldsymbol{w} - \boldsymbol{y}^*)^T (A\boldsymbol{w} - \boldsymbol{y}^*)
\end{aligned}
$$
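The last equality can be spot-checked numerically; here is a small sketch with random data (assuming N = 5 samples and two attributes, so A is 5×3):

import numpy as np

rng = np.random.default_rng(0)
A = np.hstack([np.ones((5, 1)), rng.normal(size=(5, 2))])  # first column all 1s
w = rng.normal(size=(3, 1))
y = rng.normal(size=(5, 1))
r = A.dot(w) - y                                    # residual vector Aw - y*
loss_sum = 0.5 * sum(r[j, 0]**2 for j in range(5))  # sum over the N samples
loss_mat = 0.5 * r.T.dot(r).item()                  # matrix form
print(np.isclose(loss_sum, loss_mat))               # True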

Again take the partial derivative of the loss function $L$ with respect to each component of $\boldsymbol{w}$:
$$
\begin{aligned}
\frac{\partial L}{\partial w_0} &= \sum_{j=1}^{N}(w_0 + w_1 x_1^{j*} + \cdots + w_i x_i^{j*} - y^{j*}) \cdot 1 \\
&= \sum_{j=1}^{N} w_0 + \sum_{j=1}^{N} w_1 x_1^{j*} + \cdots + \sum_{j=1}^{N} w_i x_i^{j*} - \sum_{j=1}^{N} y^{j*} \\
&= w_0 \boldsymbol{e}^T \boldsymbol{e} + w_1 \boldsymbol{e}^T \boldsymbol{x}_1^* + \cdots + w_i \boldsymbol{e}^T \boldsymbol{x}_i^* - \boldsymbol{e}^T \boldsymbol{y}^* \\
&= \boldsymbol{e}^T (w_0 \boldsymbol{e} + w_1 \boldsymbol{x}_1^* + \cdots + w_i \boldsymbol{x}_i^* - \boldsymbol{y}^*) \\
&= \boldsymbol{e}^T ([\boldsymbol{e}, \boldsymbol{x}_1^*, \cdots, \boldsymbol{x}_i^*] \boldsymbol{w} - \boldsymbol{y}^*) \\
&= \boldsymbol{e}^T (A\boldsymbol{w} - \boldsymbol{y}^*)
\end{aligned}
$$
$$
\begin{aligned}
\frac{\partial L}{\partial w_1} &= \sum_{j=1}^{N}(w_0 + w_1 x_1^{j*} + \cdots + w_i x_i^{j*} - y^{j*}) \, x_1^{j*} \\
&= \sum_{j=1}^{N} w_0 x_1^{j*} + \sum_{j=1}^{N} w_1 x_1^{j*} x_1^{j*} + \cdots + \sum_{j=1}^{N} w_i x_i^{j*} x_1^{j*} - \sum_{j=1}^{N} y^{j*} x_1^{j*} \\
&= w_0 \boldsymbol{x}_1^{*T} \boldsymbol{e} + w_1 \boldsymbol{x}_1^{*T} \boldsymbol{x}_1^* + \cdots + w_i \boldsymbol{x}_1^{*T} \boldsymbol{x}_i^* - \boldsymbol{x}_1^{*T} \boldsymbol{y}^* \\
&= \boldsymbol{x}_1^{*T} (w_0 \boldsymbol{e} + w_1 \boldsymbol{x}_1^* + \cdots + w_i \boldsymbol{x}_i^* - \boldsymbol{y}^*) \\
&= \boldsymbol{x}_1^{*T} ([\boldsymbol{e}, \boldsymbol{x}_1^*, \cdots, \boldsymbol{x}_i^*] \boldsymbol{w} - \boldsymbol{y}^*) \\
&= \boldsymbol{x}_1^{*T} (A\boldsymbol{w} - \boldsymbol{y}^*)
\end{aligned}
$$
The partial derivatives of the other components follow by the same method:
$$
\frac{\partial L}{\partial w_i} = \boldsymbol{x}_i^{*T} (A\boldsymbol{w} - \boldsymbol{y}^*)
$$
where $A = [\boldsymbol{e}, \boldsymbol{x}_1^*, \cdots, \boldsymbol{x}_i^*]$ is the matrix of x-values of the batch, with a column of all 1s added as the first column. The gradient of the loss function with respect to $\boldsymbol{w}$ is therefore:
$$
\begin{aligned}
\nabla L &= \left\{\frac{\partial L}{\partial w_0}, \cdots, \frac{\partial L}{\partial w_i}\right\}^T \\
&= [\boldsymbol{e}, \boldsymbol{x}_1^*, \cdots, \boldsymbol{x}_i^*]^T (A\boldsymbol{w} - \boldsymbol{y}^*) \\
&= A^T (A\boldsymbol{w} - \boldsymbol{y}^*)
\end{aligned}
$$
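As a sketch, this closed form can be checked against a numerical finite-difference gradient on random data (assumed shapes: six samples, two attributes):

import numpy as np

rng = np.random.default_rng(1)
A = np.hstack([np.ones((6, 1)), rng.normal(size=(6, 2))])
w0 = rng.normal(size=(3, 1))
y = rng.normal(size=(6, 1))
grad = A.T.dot(A.dot(w0) - y)            # analytic gradient A^T(Aw - y*)

def loss(w):
    r = A.dot(w) - y                     # loss 1/2 (Aw - y*)^T (Aw - y*)
    return 0.5 * r.T.dot(r).item()

eps = 1e-6
num = np.zeros_like(w0)
for k in range(3):                       # central difference per component
    d = np.zeros_like(w0)
    d[k] = eps
    num[k] = (loss(w0 + d) - loss(w0 - d)) / (2 * eps)
print(np.allclose(grad, num))            # True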
Setting the step size $step$ as before, the parameter update is:
$$
\boldsymbol{w}_{new} = \boldsymbol{w} - step \cdot \nabla L
$$
Using these matrix and vector forms, multiple linear regression is easy to implement with numpy. First prepare the data:

import numpy as np
import matplotlib.pyplot as plt

# raw data: 40 x-values (the same 20 values repeated, so the two attribute
# columns below are identical) and 20 labels
x = np.array([0.1,1.2,2.1,3.8,4.1,5.4,6.2,7.1,8.2,9.3,10.4,11.2,12.3,13.8,14.9,15.5,16.2,17.1,18.5,19.2,0.1,1.2,2.1,3.8,4.1,5.4,6.2,7.1,8.2,9.3,10.4,11.2,12.3,13.8,14.9,15.5,16.2,17.1,18.5,19.2])
y = np.array([5.7,8.8,10.8,11.4,13.1,16.6,17.3,19.4,21.8,23.1,25.1,29.2,29.9,31.8,32.3,36.5,39.1,38.4,44.2,43.4])
# reshape x into a (20, 2) attribute matrix, one row per sample
x = x.reshape(2, int(len(x)/2)).T
# prepend a column of all 1s, so x becomes the matrix A from the derivation
x = np.insert(arr=x, values=[1], obj=0, axis=1)
# make y a (20, 1) column vector
y = y.reshape(len(y), 1)
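After these transformations, x plays the role of the matrix $A$ from the derivation (one row per sample, ones column first) and y is the label column vector; a quick shape check confirms this:

print(x.shape, y.shape)  # (20, 3) (20, 1)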

The regression process is as follows:

# set the step size
step = 0.001
# list storing the loss of each round
loss_list = []
# number of epochs
epoch = 500
# batch size
batch_size = 12
# unit column vector e from the derivation (not needed below: the ones column
# it multiplies is already built into x)
e = np.ones(batch_size).reshape(batch_size, 1)
# define the parameter vector w (the bias b is its first component w0) and initialize it
w = np.zeros(3).reshape(3, 1)
# gradient-descent regression
for i in range(epoch):
    # index of the current batch; note that with 20 samples and batch_size = 12
    # there is only one full batch, so index stays 0 here
    index = i % int(len(x)/batch_size)
    # x rows (the matrix A) of the current batch
    cx = x[index*batch_size:(index+1)*batch_size]
    # y rows of the current batch
    cy = y[index*batch_size:(index+1)*batch_size]
    # residual Aw - y* and current loss 1/2 (Aw - y*)^T (Aw - y*)
    r = cx.dot(w) - cy
    loss_list.append(0.5 * r.T.dot(r).item())
    # gradient of the loss with respect to w: A^T(Aw - y*)
    grad_w = cx.T.dot(r)
    # update w
    w -= step * grad_w
print(loss_list)
plt.plot(loss_list)
plt.show()
print(w)
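As a sanity check (a sketch, not part of the original recipe), the result can be compared with the closed-form least-squares solution, which solves $A^T(A\boldsymbol{w} - \boldsymbol{y}^*) = 0$. Two caveats: the loop above only ever trains on the first batch of 12 samples, and the two attribute columns of this data are identical, so the least-squares solution is not unique and np.linalg.lstsq returns the minimum-norm one:

# closed-form least-squares fit on all 20 samples
w_ls, *_ = np.linalg.lstsq(x, y, rcond=None)
print(w_ls)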
