Linear Regression 2: Handwritten Gradient Descent in Python (2020-12-08)

A supplement to the linear regression basics covered previously:

Previous section:
https://blog.csdn.net/a18829292719/article/details/109449617

Building on basic linear regression, this post introduces two important concepts: the loss function, and the gradient descent method for optimizing it. Gradient descent is central here and is used by many algorithms. We also touched on regularization: wherever a loss function is being optimized, regularization can be added, as in many deep learning models.
In practice, plain linear regression is not used much because it is too simple. When tackling a regression problem, though, it is often useful to fit a linear model first as a sanity check, since real data can be quite messy. If the linear fit is reasonable, we can then move on to a higher-order model.

For a handwritten gradient descent implementation, see this link:
python handwriting gradient descent
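The linked implementation is not reproduced here, but the core idea of hand-coding gradient descent for linear regression can be sketched as follows (a minimal sketch with made-up toy data and a mean-squared-error loss; not the linked author's exact code):

```python
import numpy as np

# Hypothetical toy data: y = 3x + 2 plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.5, size=100)

# Append a column of ones so the intercept is learned as a weight
Xb = np.hstack([X, np.ones((X.shape[0], 1))])

w = np.zeros(2)   # [slope, intercept]
lr = 0.01         # learning rate (step size)

for _ in range(5000):
    pred = Xb @ w
    # Gradient of mean squared error: (2/n) * X^T (Xw - y)
    grad = 2.0 * Xb.T @ (pred - y) / len(y)
    w -= lr * grad  # step opposite the gradient

print(w)  # should land near [3, 2]
```

The learning rate matters: too large and the iterates diverge, too small and convergence is slow, which is exactly the tuning issue the gradient descent discussion in the previous post is about.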

Why L1 regularization is not commonly used:
1. It easily produces sparse solutions (many coefficients are driven to exactly zero).
2. The loss function contains an absolute-value term, which has a kink at zero and no derivative there. The whole function therefore cannot be optimized by direct differentiation, and the computational complexity is higher.
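Both points above can be seen in one small sketch. Because |w| has left slope -1 and right slope +1 at w = 0, plain gradient descent cannot be applied directly; a common workaround is the soft-thresholding (proximal) step, which handles the kink explicitly and is also what zeroes out small coefficients, producing the sparse solutions mentioned in point 1. (The `soft_threshold` helper below is an illustrative assumption, not something from the original post.)

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*|.|: shrinks z toward 0 by t,
    # and sets it exactly to 0 when |z| <= t (the kink at zero)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

out = soft_threshold(np.array([-2.0, -0.3, 0.0, 0.3, 2.0]), 0.5)
print(out)  # -> [-1.5  0.   0.   0.   1.5]
```

Note how every input with magnitude below the threshold 0.5 maps to exactly zero, which is why L1-regularized fits end up sparse.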


Origin: blog.csdn.net/a18829292719/article/details/110912955