Five Ways to Derive the Normal Equation

There are at least five different ways to derive the normal equation. This post aims to be a living document of the normal equation and its interpretations.


Notations

RSS stands for the residual sum of squares, $\beta$ denotes the parameters as a column vector, $X$ is an $N \times p$ matrix whose rows are the input vectors, $p$ is the number of entries/features per input vector, and $y$ denotes the labels as a column vector. That is,

$$
X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix},\qquad
\beta = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{bmatrix},\qquad
y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix}
$$
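
As a concrete illustration of these shapes, here is a minimal NumPy sketch; the sizes and random data are made up purely for demonstration:

```python
import numpy as np

# Made-up sizes, just to make the shapes concrete.
N, p = 6, 3
rng = np.random.default_rng(0)

X = rng.normal(size=(N, p))                    # each row is one input vector
beta = rng.normal(size=(p, 1))                 # parameters, a p x 1 column vector
y = X @ beta + 0.1 * rng.normal(size=(N, 1))   # labels, an N x 1 column vector

print(X.shape, beta.shape, y.shape)            # (6, 3) (3, 1) (6, 1)
```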


Method 1. Vector Projection onto the Column Space

This is the most intuitive way to understand the normal equation. The linear regression optimization is equivalent to finding the projection of the vector $y$ onto the column space of $X$, which is the span of the columns of $X$ shown below.

$$
X\beta =
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix}
\begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{bmatrix}
=
\begin{bmatrix}
\beta_1 x_{11} + \cdots + \beta_p x_{1p} \\
\beta_1 x_{21} + \cdots + \beta_p x_{2p} \\
\vdots \\
\beta_1 x_{N1} + \cdots + \beta_p x_{Np}
\end{bmatrix}
=
\beta_1 \begin{bmatrix} x_{11} \\ x_{21} \\ \vdots \\ x_{N1} \end{bmatrix}
+ \cdots +
\beta_p \begin{bmatrix} x_{1p} \\ x_{2p} \\ \vdots \\ x_{Np} \end{bmatrix}
$$

As the projection is denoted by $\hat{y} = X\beta$, the optimal configuration of $\beta$ is reached when the error vector $y - X\beta$ is orthogonal to the column space of $X$, that is

$$
X^T (y - X\beta) = 0. \tag{1}
$$

Solving this gives:

$$
\beta = (X^T X)^{-1} X^T y.
$$

Here $X^T X$ is known as the Gram matrix and $X^T y$ as the moment matrix. Intuitively, the Gram matrix captures the correlations among the features, while the moment matrix captures the contribution of each feature to the regression outcome.
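
The orthogonality condition (1) and the resulting closed form are easy to check numerically. Below is a minimal sketch on made-up random data; `np.linalg.lstsq` is used only as an independent cross-check:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 50, 4
X = rng.normal(size=(N, p))
y = rng.normal(size=(N, 1))

# Normal-equation solution: beta = (X^T X)^{-1} X^T y.
# np.linalg.solve is preferred over forming the explicit inverse.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# The residual y - X beta_hat should be orthogonal to every column of X.
residual = y - X @ beta_hat
print(np.allclose(X.T @ residual, 0))       # True (up to floating point)

# Cross-check against the library least-squares solver.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))    # True
```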


Method 2. Direct Matrix Differentiation

This is the most straightforward approach. First, rewrite the residual sum of squares $S(\beta)$ in a simpler form:

$$
S(\beta) = (y - X\beta)^T (y - X\beta)
= y^T y - \beta^T X^T y - y^T X \beta + \beta^T X^T X \beta
= y^T y - 2\beta^T X^T y + \beta^T X^T X \beta
$$

Differentiate $S(\beta)$ w.r.t. $\beta$:

$$
-2 y^T X + \beta^T \left( X^T X + (X^T X)^T \right) = -2 y^T X + 2 \beta^T X^T X = 0
$$

Solving for $\beta$ gives:

$$
\beta = (X^T X)^{-1} X^T y
$$
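
As a sanity check on this derivation, the sketch below compares the analytic gradient $-2 y^T X + 2 \beta^T X^T X$ with a central finite-difference estimate of $S(\beta)$, using made-up random data:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 30, 3
X = rng.normal(size=(N, p))
y = rng.normal(size=(N, 1))
beta = rng.normal(size=(p, 1))

def S(b):
    # Residual sum of squares S(b) = (y - Xb)^T (y - Xb)
    r = y - X @ b
    return float(r.T @ r)

# Analytic gradient from the derivation: a 1 x p row vector.
grad_analytic = -2 * y.T @ X + 2 * beta.T @ (X.T @ X)

# Central finite-difference estimate, one coordinate at a time.
eps = 1e-6
grad_numeric = np.zeros((1, p))
for j in range(p):
    e = np.zeros((p, 1)); e[j] = eps
    grad_numeric[0, j] = (S(beta + e) - S(beta - e)) / (2 * eps)

print(np.allclose(grad_analytic, grad_numeric, atol=1e-4))  # True
```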


Method 3. Matrix Differentiation with the Chain Rule

This is the simplest method for a lazy person, as it takes very little effort to reach the solution. The key is to apply the chain rule:

$$
\frac{\partial S(\beta)}{\partial \beta}
= \frac{\partial (y - X\beta)^T (y - X\beta)}{\partial (y - X\beta)} \cdot \frac{\partial (y - X\beta)}{\partial \beta}
= -2 (y - X\beta)^T X = 0
$$

Solving for $\beta$ gives:

$$
\beta = (X^T X)^{-1} X^T y
$$

This method requires an understanding of matrix differentiation of the quadratic form:

$$
\frac{\partial\, x^T W x}{\partial x} = x^T (W + W^T)
$$
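
A quick numerical check of this identity on a random, non-symmetric $W$ (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
W = rng.normal(size=(n, n))        # W need not be symmetric
x = rng.normal(size=(n, 1))

f = lambda v: float(v.T @ W @ v)   # quadratic form x^T W x

# Claimed derivative: x^T (W + W^T), a 1 x n row vector.
grad_analytic = x.T @ (W + W.T)

# Central finite-difference estimate.
eps = 1e-6
grad_numeric = np.zeros((1, n))
for j in range(n):
    e = np.zeros((n, 1)); e[j] = eps
    grad_numeric[0, j] = (f(x + e) - f(x - e)) / (2 * eps)

print(np.allclose(grad_analytic, grad_numeric, atol=1e-5))  # True
```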


Method 4. Without Matrix Differentiation

We can rewrite $S(\beta)$ as follows:

$$
S(\beta) = \langle \beta, \beta \rangle
- 2 \left\langle \beta,\; (X^T X)^{-1} X^T y \right\rangle
+ \left\langle (X^T X)^{-1} X^T y,\; (X^T X)^{-1} X^T y \right\rangle + C
$$

where $\langle \cdot, \cdot \rangle$ is the inner product defined by

$$
\langle x, y \rangle = x^T (X^T X)\, y.
$$

The idea is to complete the square: rewrite $S(\beta)$ in the form $S(\beta) = \langle \beta - a, \beta - a \rangle + b$, the vector analogue of $(x - a)^2 + b$, so that the minimizer can be read off directly as $\beta = a = (X^T X)^{-1} X^T y$.
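
A small sketch of this completing-the-square identity on made-up data: with $a = (X^T X)^{-1} X^T y$ and the inner product above, $S(\beta) = \langle \beta - a, \beta - a \rangle + C$ for a constant $C$ that does not depend on $\beta$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 40, 3
X = rng.normal(size=(N, p))
y = rng.normal(size=(N, 1))

G = X.T @ X                          # Gram matrix: <u, v> = u^T G v
a = np.linalg.solve(G, X.T @ y)      # a = (X^T X)^{-1} X^T y
C = float(y.T @ y - a.T @ G @ a)     # constant left over after completing the square

def S(b):                            # residual sum of squares
    r = y - X @ b
    return float(r.T @ r)

beta = rng.normal(size=(p, 1))       # any test point
lhs = S(beta)
rhs = float((beta - a).T @ G @ (beta - a)) + C
print(np.isclose(lhs, rhs))          # True: S(beta) = <beta - a, beta - a> + C
print(np.isclose(S(a), C))           # the minimum is attained at beta = a
```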


Method 5. Statistical Learning Theory

An alternative derivation of the normal equation arises from statistical learning theory. The aim is to minimize the expected prediction error, given by:

$$
EPE(\beta) = \int \left( y - x^T \beta \right)^2 \Pr(dx, dy)
$$

where $x$ stands for a column vector of random variables, $y$ denotes the target random variable, and $\beta$ denotes a column vector of parameters (note that these definitions differ from the notation used earlier).
Differentiating $EPE(\beta)$ w.r.t. $\beta$ gives:

$$
\frac{\partial EPE(\beta)}{\partial \beta} = \int 2 \left( y - x^T \beta \right)(-1)\, x^T \Pr(dx, dy)
$$

Before we proceed, let's check the dimensions to make sure the partial derivative is correct. $EPE$ is the expected error, a $1 \times 1$ scalar, and $\beta$ is an $N \times 1$ column vector. By the Jacobian convention of vector calculus, the resulting partial derivative should take the form

$$
\frac{\partial EPE}{\partial \beta} = \left( \frac{\partial EPE}{\partial \beta_1}, \frac{\partial EPE}{\partial \beta_2}, \ldots, \frac{\partial EPE}{\partial \beta_N} \right)
$$

which is a $1 \times N$ row vector. Looking back at the right-hand side of the equation above, $2(y - x^T\beta)(-1)$ is a scalar while $x^T$ is a row vector, so their product has the same $1 \times N$ dimension; we conclude that the partial derivative above is dimensionally correct. This derivative also mirrors the relationship between the expected error and the way the parameters should be adjusted to reduce it. To see why, think of $2(y - x^T\beta)(-1)$ as the error incurred by the current parameter configuration $\beta$ and $x^T$ as the values of the input attributes; the resulting derivative equals the error scaled by each input attribute. Put another way, the error contribution of each parameter $\beta_i$ is monotonically related both to the error term $2(y - x^T\beta)(-1)$ and to the input value $x_i$ that multiplies $\beta_i$.

Now, let's go back to the derivation. Because $2(y - x^T\beta)(-1)$ is $1 \times 1$, we can replace it with its transpose:

$$
\frac{\partial EPE(\beta)}{\partial \beta} = \int 2 \left( y - x^T \beta \right)^T (-1)\, x^T \Pr(dx, dy).
$$

Solving $\frac{\partial EPE(\beta)}{\partial \beta} = 0$ gives:

$$
\begin{aligned}
E\left[ y^T x^T - \beta^T x x^T \right] &= 0 \\
E\left[ \beta^T x x^T \right] &= E\left[ y^T x^T \right] \\
E\left[ x x^T \beta \right] &= E\left[ x y \right] \\
\beta &= E\left[ x x^T \right]^{-1} E\left[ x y \right].
\end{aligned}
$$
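
A Monte Carlo sketch of this result, with a made-up data-generating process: estimating $E[x x^T]$ and $E[x y]$ from samples and solving $\beta = E[x x^T]^{-1} E[x y]$ reproduces the finite-sample normal-equation solution:

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_features = 100_000, 3
beta_true = np.array([[1.0], [-2.0], [0.5]])

x = rng.normal(size=(n_samples, n_features))         # rows are draws of the random vector x
y = x @ beta_true + rng.normal(size=(n_samples, 1))  # y = x^T beta_true + noise

# Sample estimates of E[x x^T] and E[x y].
Exx = x.T @ x / n_samples
Exy = x.T @ y / n_samples

beta_epe = np.linalg.solve(Exx, Exy)         # beta = E[x x^T]^{-1} E[x y]
beta_ls = np.linalg.solve(x.T @ x, x.T @ y)  # finite-sample normal equation

print(np.allclose(beta_epe, beta_ls))        # True: the 1/n factors cancel
print(beta_epe.ravel())                      # close to beta_true
```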


Reposted from blog.csdn.net/qq_29159273/article/details/77188041