Machine Learning: Solving Linear Regression with the Least Squares Method

Reference: "Machine Learning" (the Watermelon Book) by Zhou Zhihua

The following are personal notes; some details are inevitably wrong. For reference only!


1. Principle:

The mean square error (MSE) has a very intuitive geometric meaning: it corresponds to the commonly used Euclidean distance. The method of solving a model by minimizing the mean square error is called the least square method. In linear regression, the least squares method tries to find a line that minimizes the sum of the squared Euclidean distances from all the samples to the line.
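To fix the notation used below: for a one-dimensional input, the linear model and the mean square error over m samples are

$$f(x_i) = w x_i + b, \qquad E(w, b) = \sum_{i=1}^{m} \bigl(y_i - f(x_i)\bigr)^2$$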

2. Model analysis:

How do we determine w and b? Clearly, the key is how to measure the difference between f(x) and y. The mean square error (equation (2.2) in the book) is the most commonly used performance measure in regression tasks, so we can try to minimize it, which gives equation (3.4):

$$(w^*, b^*) = \underset{(w,\,b)}{\arg\min} \sum_{i=1}^{m} \bigl(f(x_i) - y_i\bigr)^2 = \underset{(w,\,b)}{\arg\min} \sum_{i=1}^{m} \bigl(y_i - w x_i - b\bigr)^2$$

Taking the partial derivatives of E(w, b) with respect to w and b gives equations (3.5) and (3.6):

$$\frac{\partial E(w,b)}{\partial w} = 2\left(w \sum_{i=1}^{m} x_i^2 - \sum_{i=1}^{m} (y_i - b)\, x_i\right), \qquad \frac{\partial E(w,b)}{\partial b} = 2\left(m b - \sum_{i=1}^{m} (y_i - w x_i)\right)$$

Setting (3.5) and (3.6) to zero then yields the closed-form solution for the optimal w and b:

$$w = \frac{\sum_{i=1}^{m} y_i (x_i - \bar{x})}{\sum_{i=1}^{m} x_i^2 - \frac{1}{m}\left(\sum_{i=1}^{m} x_i\right)^2}, \qquad b = \frac{1}{m} \sum_{i=1}^{m} (y_i - w x_i)$$

where $\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i$ is the mean of the inputs.
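As a concrete illustration of the closed-form solution, here is a minimal NumPy sketch. The function name fit_least_squares and the synthetic data are illustrative additions, not from the book; np.polyfit is used only as a sanity check.

```python
import numpy as np

def fit_least_squares(x, y):
    """Closed-form least squares for y = w*x + b (book eqs. 3.7 and 3.8)."""
    m = len(x)
    x_bar = x.mean()
    # w = sum_i y_i (x_i - x_bar) / (sum_i x_i^2 - (sum_i x_i)^2 / m)
    w = np.sum(y * (x - x_bar)) / (np.sum(x ** 2) - np.sum(x) ** 2 / m)
    # b = (1/m) * sum_i (y_i - w * x_i)
    b = np.mean(y - w * x)
    return w, b

# Synthetic data scattered around the line y = 2x + 1.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=50)

w, b = fit_least_squares(x, y)
print(f"w = {w:.4f}, b = {b:.4f}")

# Sanity check against NumPy's built-in degree-1 polynomial fit.
w_ref, b_ref = np.polyfit(x, y, deg=1)
assert np.isclose(w, w_ref) and np.isclose(b, b_ref)
```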

3. Use the least squares method to solve linear regression

About the partial derivative process:

Derivation manuscript:
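In place of the manuscript image, here is a sketch of the same derivation, written with the loss scaled by 1/(2m) (an assumption consistent with the notes below; this scaling changes neither the partial derivatives' zeros nor the minimizer):

$$J(w, b) = \frac{1}{2m} \sum_{i=1}^{m} \bigl(w x_i + b - y_i\bigr)^2$$

$$\frac{\partial J}{\partial w} = \frac{1}{m} \sum_{i=1}^{m} \bigl(w x_i + b - y_i\bigr)\, x_i, \qquad \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} \bigl(w x_i + b - y_i\bigr)$$

Setting both partial derivatives to zero and solving the resulting pair of linear equations recovers the closed-form w and b given in section 2.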

Notice:

1. The factor of one half in front of the loss is there so that, when the loss function is differentiated, it cancels the 2 that comes down from the squared term (see the worked line after this list).

2. m, the number of samples, is a constant with respect to w and b.
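Concretely, for a single sample the one half cancels the 2 produced by the chain rule:

$$\frac{\partial}{\partial w}\left[\frac{1}{2}\bigl(w x_i + b - y_i\bigr)^2\right] = \bigl(w x_i + b - y_i\bigr)\, x_i$$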


The above follows Zhou Zhihua's Watermelon Book.


Origin: blog.csdn.net/qq_21402983/article/details/123952862