The Multivariate Linear Regression derivative problem that confused me for so long

Suppose we have samples $(\vec{x}^{(i)}, y^{(i)})$ with sample size $N$, where $\vec{x}^{(i)} \in \mathbb{R}^D$. (Any bias term can be absorbed by appending a constant feature, so no separate intercept appears below.) The model and loss are:
$$\hat{y} = \sum_{j=1}^D \beta_j x_j$$
$$\mathcal{L}(a, b) = \frac{1}{2}(a - b)^2$$
The empirical risk over the training set is then:

$$\begin{aligned} \varepsilon(\beta_1, \ldots, \beta_D) &= \frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(\hat{y}^{(i)}, y^{(i)}) \\ &= \frac{1}{2N} \sum_{i=1}^{N}\big(\hat{y}^{(i)} - y^{(i)}\big)^2 \\ &= \frac{1}{2N} \sum_{i=1}^N \Big(\sum_{j=1}^D \beta_j x_j^{(i)} - y^{(i)}\Big)^2 \end{aligned}$$
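As a quick sanity check, here is a minimal NumPy sketch of this empirical risk (the function and variable names are mine, not from the original post):

```python
import numpy as np

def empirical_risk(X, y, beta):
    """Empirical risk (1/2N) * sum_i (x^(i) . beta - y^(i))^2.

    X: (N, D) design matrix, y: (N,) targets, beta: (D,) weights.
    """
    residuals = X @ beta - y  # y_hat^(i) - y^(i) for every sample
    return 0.5 * np.mean(residuals ** 2)
```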

Take the derivative with respect to $\beta_j$:
$$\begin{aligned} \frac{\partial \varepsilon}{\partial \beta_j} &= \frac{1}{N} \sum_{i=1}^N x_j^{(i)}\big(\hat{y}^{(i)} - y^{(i)}\big) \\ &= \frac{1}{N} \sum_{i=1}^N x_j^{(i)}\Big(\sum_{j'=1}^D \beta_{j'} x_{j'}^{(i)} - y^{(i)}\Big) \qquad \text{(this is exactly the step that used to confuse me)} \\ &= \frac{1}{N} \sum_{j'=1}^D \Big(\sum_{i=1}^{N} x_j^{(i)} x_{j'}^{(i)}\Big)\beta_{j'} - \frac{1}{N}\sum_{i=1}^N x_j^{(i)} y^{(i)} \end{aligned}$$

The key point is that the summation index inside $\hat{y}^{(i)}$ must be renamed to $j'$ so it does not collide with the fixed index $j$ we differentiate with respect to; only the $j' = j$ term survives differentiation, which is where the factor $x_j^{(i)}$ comes from. The last line then just swaps the order of the $i$ and $j'$ sums.
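To make the index bookkeeping concrete, here is the $D = 2$ case written out by hand (my own worked example, not from the original post):

$$\frac{\partial}{\partial \beta_1}\, \frac{1}{2}\big(\beta_1 x_1 + \beta_2 x_2 - y\big)^2 = x_1 \big(\beta_1 x_1 + \beta_2 x_2 - y\big)$$

The inner sum over features reappears intact inside the parentheses, multiplied by the single feature $x_1$ that matches the differentiation index.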
Let $A_{jj'} = \frac{1}{N} \sum_{i=1}^N x_j^{(i)} x_{j'}^{(i)}$, the entries of a matrix $A \in \mathbb{R}^{D \times D}$, and $c_j = \frac{1}{N}\sum_{i=1}^N x_j^{(i)} y^{(i)}$, the entries of a vector $c \in \mathbb{R}^D$. Then (note the $\frac{1}{N}$ now lives inside $A_{jj'}$ and $c_j$, so it must not appear again in front):
$$\begin{aligned} \frac{\partial \varepsilon}{\partial \beta_j} &= \frac{1}{N} \sum_{j'=1}^D \Big(\sum_{i=1}^{N} x_j^{(i)} x_{j'}^{(i)}\Big)\beta_{j'} - \frac{1}{N}\sum_{i=1}^N x_j^{(i)} y^{(i)} \\ &= \sum_{j'=1}^D A_{jj'}\beta_{j'} - c_j \stackrel{\text{set}}{=} 0 \end{aligned}$$
Let $X \in \mathbb{R}^{N \times D}$ be the design matrix whose $i$-th row is $\vec{x}^{(i)T}$, so that $A = \frac{1}{N}X^T X$ and $c = \frac{1}{N} X^T y$:
$$X = \begin{bmatrix} \vec{x}^{(1)T} \\ \vec{x}^{(2)T} \\ \vdots \\ \vec{x}^{(N)T} \end{bmatrix}$$
$$\frac{\partial \varepsilon}{\partial \beta_j} = \sum_{j'=1}^D A_{jj'}\beta_{j'} - c_j = (A\beta - c)_j$$

Stacking all $D$ partial derivatives into the gradient:

$$\nabla_\beta\, \varepsilon = A\beta - c \stackrel{\text{set}}{=} 0$$
$$\hat{\beta} = A^{-1}c = (X^T X)^{-1} X^T y$$

(assuming $X^T X$ is invertible).
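A quick numerical check of this closed form against NumPy's built-in least-squares solver (the synthetic data and all names here are my own, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 100, 3
X = rng.normal(size=(N, D))                    # design matrix, one row per sample
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=N)   # noisy linear targets

# Closed form: beta_hat = (X^T X)^{-1} X^T y.
# Solving the normal equations is preferred over forming the explicit inverse.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Should agree with NumPy's least-squares routine.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))       # True
```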
Finally solved!

A simpler approach is to rewrite the risk directly in matrix form:
$$\begin{aligned} \varepsilon(\beta_1, \ldots, \beta_D) &= \frac{1}{2N} \sum_{i=1}^N \Big(\sum_{j=1}^D \beta_j x_j^{(i)} - y^{(i)}\Big)^2 \\ &= \frac{1}{2N}(X\beta - y)^T (X\beta - y) \end{aligned}$$
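Using the standard matrix-calculus identity $\nabla_\beta\, \frac{1}{2}\|X\beta - y\|^2 = X^T(X\beta - y)$, the intermediate gradient step (filled in here for completeness) is:

$$\nabla_\beta\, \varepsilon = \frac{1}{N} X^T (X\beta - y) \stackrel{\text{set}}{=} 0 \quad\Longrightarrow\quad X^T X \beta = X^T y$$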
Finally, the least-squares estimate (which coincides with the MLE under Gaussian noise) is $\hat{\beta} = (X^T X)^{-1} X^T y$, matching the index-by-index derivation above.

This is only an estimate from a single training set, but what we really want is the true error (prediction error), which can be defined as:
$$\begin{aligned} \varepsilon_{\text{true}}(\beta_1, \ldots, \beta_D) &= \frac{1}{2}\, \mathbb{E}\Big(\sum_{j=1}^D \beta_j \mathbf{x}_j - \mathbf{y}\Big)^2 \\ &= \frac{1}{2} \int \Big(\sum_{j=1}^D \beta_j \mathbf{x}_j - \mathbf{y}\Big)^2 p(\mathbf{x}, \mathbf{y})\, d\mathbf{x}\, d\mathbf{y} \end{aligned}$$

where the expectation is taken over the joint data distribution $p(\mathbf{x}, \mathbf{y})$.
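In practice $p(\mathbf{x}, \mathbf{y})$ is unknown, but when we control the generative process the true error can be approximated by Monte Carlo sampling; a minimal sketch under an assumed Gaussian model (entirely my construction, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 3
beta_true = np.array([2.0, -1.0, 0.5])
beta_hat = np.array([1.9, -1.1, 0.6])  # e.g. fitted on one training set

# Monte Carlo estimate of the true error under an assumed p(x, y):
# x ~ N(0, I), y = x . beta_true + N(0, 0.1^2) noise.
M = 1_000_000
X = rng.normal(size=(M, D))
y = X @ beta_true + 0.1 * rng.normal(size=M)
true_error = 0.5 * np.mean((X @ beta_hat - y) ** 2)
print(true_error)
```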
If you want to read more about the bias-variance tradeoff in the linear regression model, see:
https://courses.cs.washington.edu/courses/cse546/12wi/slides/cse546wi12LinearRegression.pdf

Reposted from blog.csdn.net/weixin_32334291/article/details/88735142