Mathematical Derivation of the Least Squares Method (Linear Regression in Machine Learning)
- Yan Jiang Yi / 2019.08.04
In simple linear regression, the data set has only one feature, and the model takes the form ŷ = ax + b. The goal is to find the parameters a and b that minimize the loss function — here, the sum of squared errors between the predicted values and the true values. Solving for the optimal a and b in closed form in this setting is called the least squares method; the derivation of the formulas for a and b is given below.
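The setup above can be written out explicitly (a reconstruction, since the original formula image did not survive; the n samples are (x_i, y_i)):

```latex
\hat{y}_i = a x_i + b,
\qquad
L(a, b) = \sum_{i=1}^{n} \left( y_i - a x_i - b \right)^2
```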
The key mathematical fact underlying the derivation is convexity: L(a, b) is a convex function of a and b, so setting its partial derivatives to zero yields the global minimum rather than merely a stationary point. The derivation of the parameters a and b proceeds in the following steps.
Step 1: take the partial derivative of L with respect to b and set it to zero:
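A reconstruction of this step (the original derivation image is missing; x̄ and ȳ denote the sample means of x and y):

```latex
\frac{\partial L}{\partial b}
= \sum_{i=1}^{n} 2\left( y_i - a x_i - b \right)(-1) = 0
\;\Longrightarrow\;
nb = \sum_{i=1}^{n} y_i - a \sum_{i=1}^{n} x_i
\;\Longrightarrow\;
b = \bar{y} - a \bar{x}
```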
Step 2: take the partial derivative of L with respect to a and set it to zero:
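A reconstruction of this step (the original derivation image is missing). Substituting b = ȳ − ax̄ from Step 1 and solving for a:

```latex
\frac{\partial L}{\partial a}
= \sum_{i=1}^{n} 2\left( y_i - a x_i - b \right)(-x_i) = 0
```

```latex
\sum_{i=1}^{n} x_i \left( y_i - a x_i - \bar{y} + a \bar{x} \right) = 0
\;\Longrightarrow\;
a \left( \sum_{i=1}^{n} x_i^2 - n \bar{x}^2 \right)
= \sum_{i=1}^{n} x_i y_i - n \bar{x} \bar{y}
```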
This yields the final least squares formulas for a and b:

a = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)²,  b = ȳ − a x̄
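As a sanity check, the closed-form formulas can be sketched in a few lines of NumPy (the function name and data values below are illustrative, not from the original post):

```python
import numpy as np

def least_squares_fit(x, y):
    """Closed-form simple linear regression y ≈ a*x + b.

    a = sum((x_i - x̄)(y_i - ȳ)) / sum((x_i - x̄)^2)
    b = ȳ - a * x̄
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x_mean, y_mean = x.mean(), y.mean()
    # Slope: covariance of x and y divided by variance of x (unnormalized)
    a = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)
    # Intercept follows from Step 1 of the derivation
    b = y_mean - a * x_mean
    return a, b

# Illustrative data generated exactly from y = 2x + 1,
# so the fit should recover a = 2 and b = 1
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [3.0, 5.0, 7.0, 9.0, 11.0]
a, b = least_squares_fit(x, y)
print(a, b)  # a ≈ 2.0, b ≈ 1.0
```

On noisy data the same formulas give the line minimizing the sum of squared residuals, matching what `np.polyfit(x, y, 1)` would return.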