Andrew Ng's Machine Learning Notes -- Week 3-4: Solving the Problem of Overfitting

Week 3-4. Solving the Problem of Overfitting


1. The Problem of Overfitting

Underfitting corresponds to high bias; overfitting corresponds to high variance.

Ways to avoid overfitting:

1. Reduce the number of features: manually select which features to keep, or use a model selection algorithm.
2. Regularization: keep all the features, but reduce the magnitude of the parameters theta_j. This works well when there are many features, each of which contributes a little to predicting y.


2. Cost Function

A regularization term (a penalty) is added to the cost function J for each parameter theta_j, which drives all of the parameters toward smaller values.
By convention, no regularization term is added to theta_0.
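
Written out for linear regression, the regularized cost function is:

$$J(\theta) = \frac{1}{2m}\left[\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 + \lambda\sum_{j=1}^{n}\theta_j^2\right]$$

Note that the penalty sum starts at j = 1, so theta_0 is left out.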

If the coefficient lambda (the regularization parameter) in the regularization term is too large, all of the parameters are pushed close to zero, the hypothesis degenerates to roughly h(x) = theta_0 (a horizontal line), and underfitting occurs.
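
A minimal sketch of this cost function in Python/NumPy (the function name and the design-matrix layout are my own choices, not from the original notes):

```python
import numpy as np

def regularized_cost(theta, X, y, lam):
    """Regularized linear regression cost J(theta).

    X is the (m, n+1) design matrix whose first column is all ones,
    so theta[0] plays the role of theta_0 and is not penalized.
    """
    m = len(y)
    residuals = X @ theta - y
    penalty = lam * np.sum(theta[1:] ** 2)  # skip theta_0
    return (residuals @ residuals + penalty) / (2 * m)
```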

3. Regularized Linear Regression

Moving the penalty's gradient into the update and factoring, the gradient descent rule for j >= 1 becomes

$$\theta_j := \theta_j\left(1 - \alpha\frac{\lambda}{m}\right) - \alpha\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$$

The factor 1 - alpha*lambda/m is slightly less than 1, so each step first compresses theta_j toward 0 (for example, theta_j becomes 0.99*theta_j); the term after the minus sign is the same as in the original gradient descent formula, so the update is equivalent to an ordinary gradient descent step applied to 0.99*theta_j. The update for theta_0 is unchanged.
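
A sketch of one such update step (again NumPy; the names are assumptions of mine, not from the course):

```python
import numpy as np

def gradient_step(theta, X, y, alpha, lam):
    """One regularized gradient descent step for linear regression."""
    m = len(y)
    grad = X.T @ (X @ theta - y) / m       # unregularized gradient
    shrink = 1 - alpha * lam / m           # slightly below 1, e.g. 0.99
    new_theta = shrink * theta - alpha * grad
    new_theta[0] = theta[0] - alpha * grad[0]  # theta_0 is not shrunk
    return new_theta
```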
Applying the regularization term to the normal equation gives

$$\theta = \left(X^TX + \lambda M\right)^{-1}X^Ty$$

where M is the (n+1) x (n+1) identity matrix with its top-left entry set to 0, so theta_0 is again not regularized. A side benefit: as long as lambda > 0, the matrix X^T X + lambda*M is always invertible, even when X^T X itself is singular.
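
A sketch of the closed-form solution (function name is mine):

```python
import numpy as np

def normal_equation(X, y, lam):
    """Closed-form regularized linear regression solution."""
    n = X.shape[1]                 # n + 1 columns, including the bias column
    M = np.eye(n)
    M[0, 0] = 0.0                  # exclude theta_0 from regularization
    return np.linalg.solve(X.T @ X + lam * M, X.T @ y)
```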


4. Regularized Logistic Regression
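
The same idea carries over: the penalty (lambda/2m) * sum of theta_j^2 is added to the logistic regression cost,

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right)\log\left(1 - h_\theta(x^{(i)})\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

and the gradient descent update takes exactly the same form as for linear regression above; only the hypothesis changes, to the sigmoid h(x) = 1 / (1 + e^(-theta^T x)). A sketch of this cost (the helper names are assumptions of mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(theta, X, y, lam):
    """Regularized logistic regression cost."""
    m = len(y)
    h = sigmoid(X @ theta)
    cross_entropy = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    penalty = lam * np.sum(theta[1:] ** 2) / (2 * m)  # skip theta_0
    return cross_entropy + penalty
```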

