Underfitting and Overfitting Explained by Example: Polynomial Fitting

Table of Contents:

1. Import the necessary modules

2. Generate data

2.1 Build a data generation function
2.2 Generate training set
2.3 Generate test set

3. Polynomial model fitting

3.1 Building a polynomial feature model training function
3.2 First-order linear model fitting
3.3 Third-order polynomial model fitting
3.4 Tenth-order polynomial model fitting
3.5 Thirty-order polynomial model fitting
3.6 Metric comparison

4. Test set inspection

4.1 Polynomial feature model prediction function
4.2 First-order linear model prediction
4.3 Third-order polynomial model prediction
4.4 Tenth-order polynomial model prediction
4.5 Thirty-order polynomial model prediction
4.6 Metric comparison

5. Summary of underfitting and overfitting

Text content:

1. Import the necessary modules

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

2. Generate data

2.1 Build a data generation function

def data_generator(samples, random_seed=0):
    np.random.seed(random_seed)  # set the random seed
    X = np.random.uniform(-5, 5, size=samples)  # draw `samples` real numbers uniformly from [-5, 5]
    y_real = 0.5*X**3 + X**2 + 2*X + 1  # true values of y
    err = np.random.normal(0, 5, size=samples)  # normally distributed noise (mean 0, standard deviation 5)
    y = y_real + err  # add the noise to the true values to get the sample y values
    return X, y, y_real

2.2 Generate training set

X_train, y_train, y_train_real = data_generator(
    samples=100, random_seed=34)
# scatter plot of the training data
plt.scatter(
    X_train, y_train, marker='o',
    color='g', label='train dataset')
# plot the true function curve
plt.plot(np.sort(X_train), y_train_real[np.argsort(X_train)], color='b', label='real curve')
plt.legend()
plt.xlabel('x')
plt.ylabel('y')
plt.show()

Scatter plot of training set data and actual curve

2.3 Generate test set

X_test, y_test, y_test_real = data_generator(
    samples=100, random_seed=12)
# scatter plot of the test data
plt.scatter(
    X_test, y_test, color='c', label='test dataset')
plt.legend()
plt.xlabel('x')
plt.ylabel('y')
plt.show()

Scatter plot of test set data

3. Polynomial model fitting

  • Create polynomial features with sklearn.preprocessing.PolynomialFeatures.
  • Parameters used:
    —degree: the order of the polynomial features; the default is 2.
    —include_bias: whether to include a bias column; the default is True.
  • Use the fit_transform method to transform the data.
  • Standardize the features with sklearn.preprocessing.StandardScaler (subtract the mean, divide by the standard deviation), again via fit_transform (a minimal sketch of both transforms follows this list).
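
A minimal sketch of these two transforms on toy data (the array below is illustrative, not the article's dataset):

import numpy as np
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X_toy = np.array([[1.0], [2.0], [3.0]])  # shape (3, 1): three samples, one feature

# expand x into the columns [x, x^2, x^3]; include_bias=False drops the constant column
poly = PolynomialFeatures(degree=3, include_bias=False)
X_poly = poly.fit_transform(X_toy)  # shape (3, 3)

# standardize each column: subtract its mean and divide by its standard deviation
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_poly)

print(X_poly)    # [[ 1.  1.  1.]  [ 2.  4.  8.]  [ 3.  9. 27.]]
print(X_scaled)  # every column now has mean 0 and unit variance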

3.1 Building a polynomial feature model training function

from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def poly_fit_train(degree, X, y, y_real, model=None):
    # raise an error if degree is not an integer
    if not isinstance(degree, int):
        raise ValueError('degree should be an integer.')
    # raise an error if degree is not positive
    if degree <= 0:
        raise ValueError('degree should be greater than 0.')
    # reshape the features from (samples,) to (samples, 1), as model.fit requires 2-D input
    X_2D = X.reshape(-1, 1)
    # if degree is greater than 1, generate polynomial features
    if degree > 1:
        poly = PolynomialFeatures(degree=degree, include_bias=False)
        X_2D = poly.fit_transform(X_2D)
        # standardize the data (subtract the mean, divide by the standard deviation)
        scaler = StandardScaler()
        X_2D = scaler.fit_transform(X_2D)
    # create a linear regression model if none was supplied
    if model is None:
        model = LinearRegression()
    # train the model
    model.fit(X_2D, y)
    # model predictions on the training data
    y_pred = model.predict(X_2D)
    # scatter plot of the samples
    plt.scatter(X, y, marker='o', color='g', label='train dataset')
    # plot the true function curve
    plt.plot(np.sort(X), y_real[np.argsort(X)], color='b', label='real curve')
    # plot the fitted curve
    plt.plot(np.sort(X), y_pred[np.argsort(X)], color='r', label='predict curve')
    plt.legend()
    plt.xlabel('x')
    plt.ylabel('y')
    plt.show()
    return y_pred, model

3.2 First-order linear model fitting

y_train_pred1,reg1=poly_fit_train(degree=1,X=X_train,y=y_train,y_real=y_train_real)

First-order linear fit

3.3 Third-order polynomial model fitting

y_train_pred3,reg3=poly_fit_train(
	degree=3,X=X_train,y=y_train,y_real=y_train_real)

Third-order model fitting

3.4 Tenth-order polynomial model fitting

y_train_pred10,reg10=poly_fit_train(
	degree=10,X=X_train,y=y_train,y_real=y_train_real)

Tenth order model fitting

3.5 Thirty-order polynomial model fitting

y_train_pred30,reg30=poly_fit_train(
	degree=30,X=X_train,y=y_train,y_real=y_train_real)

Thirty-order model fitting

3.6 Metric comparison

from sklearn.metrics import mean_squared_error
# compute the training-set MSE
mse_train1 = mean_squared_error(y_train, y_train_pred1)
mse_train3 = mean_squared_error(y_train, y_train_pred3)
mse_train10 = mean_squared_error(y_train, y_train_pred10)
mse_train30 = mean_squared_error(y_train, y_train_pred30)
# print the results
print('MSE:')
print('1 order polynomial:{:.2f}'.format(mse_train1))
print('3 order polynomial:{:.2f}'.format(mse_train3))
print('10 order polynomial:{:.2f}'.format(mse_train10))
print('30 order polynomial:{:.2f}'.format(mse_train30))
# output
MSE:
1 order polynomial:149.92
3 order polynomial:24.32
10 order polynomial:23.64
30 order polynomial:15.05
  • Metric summary: ranked by training-set MSE from best to worst, the models are the 30th-order, 10th-order, 3rd-order, and 1st-order polynomials.

4. Test set inspection

4.1 Polynomial feature model prediction function

def poly_fit_predict(degree, X, y, model):
    # raise an error if degree is not an integer
    if not isinstance(degree, int):
        raise ValueError('degree should be an integer.')
    # raise an error if degree is not positive
    if degree <= 0:
        raise ValueError('degree should be greater than 0.')
    # reshape the features from (samples,) to (samples, 1), as model.predict requires 2-D input
    X_2D = X.reshape(-1, 1)
    # if degree is greater than 1, generate polynomial features
    if degree > 1:
        poly = PolynomialFeatures(degree=degree, include_bias=False)
        X_2D = poly.fit_transform(X_2D)
        # standardize the data (subtract the mean, divide by the standard deviation)
        scaler = StandardScaler()
        X_2D = scaler.fit_transform(X_2D)
    # model predictions
    y_pred = model.predict(X_2D)
    # scatter plot of the samples
    plt.scatter(X, y, marker='o',
                color='c', label='test dataset')
    # plot the predicted curve
    plt.plot(np.sort(X), y_pred[np.argsort(X)],
             color='r', label=str(degree) + ' order fitting')
    plt.legend()
    plt.xlabel('x')
    plt.ylabel('y')
    plt.show()
    return y_pred
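
Note that, mirroring the training function, poly_fit_predict refits PolynomialFeatures and StandardScaler on the test data. Strictly speaking, the transforms fitted on the training set should be reused at prediction time; a minimal sketch of that pattern with sklearn.pipeline.make_pipeline (not part of the original article) looks like this:

from sklearn.pipeline import make_pipeline

# fit the transforms and the regressor on the training data as one object
pipe = make_pipeline(
    PolynomialFeatures(degree=3, include_bias=False),
    StandardScaler(),
    LinearRegression())
pipe.fit(X_train.reshape(-1, 1), y_train)

# at prediction time the training-set transforms are reused automatically
y_test_pipe = pipe.predict(X_test.reshape(-1, 1))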

4.2 First-order linear model prediction

y_test_pred1=poly_fit_predict(
	degree=1,X=X_test,y=y_test,model=reg1)

First-order model prediction

4.3 Third-order polynomial model prediction

y_test_pred3=poly_fit_predict(
	degree=3,X=X_test,y=y_test,model=reg3)

Third-order model prediction

4.4 Tenth-order polynomial model prediction

y_test_pred10=poly_fit_predict(
	degree=10,X=X_test,y=y_test,model=reg10)

Tenth order model prediction

4.5 Thirty-order polynomial model prediction

y_test_pred30=poly_fit_predict(
	degree=30,X=X_test,y=y_test,model=reg30)

Thirty-order model prediction

4.6 Metric comparison

# compute the test-set MSE
mse_test1 = mean_squared_error(y_test, y_test_pred1)
mse_test3 = mean_squared_error(y_test, y_test_pred3)
mse_test10 = mean_squared_error(y_test, y_test_pred10)
mse_test30 = mean_squared_error(y_test, y_test_pred30)
# print the results
print('MSE:')
print('1 order polynomial:{:.2f}'.format(mse_test1))
print('3 order polynomial:{:.2f}'.format(mse_test3))
print('10 order polynomial:{:.2f}'.format(mse_test10))
print('30 order polynomial:{:.2f}'.format(mse_test30))
# output
MSE:
1 order polynomial:659.95
3 order polynomial:39.71
10 order polynomial:41.00
30 order polynomial:85.45
  • Metric summary: ranked by test-set MSE from best to worst, the models are the 3rd-order, 10th-order, 30th-order, and 1st-order polynomials.

5. Summary of underfitting and overfitting

  • Underfitting: the chosen model is too simple, so its predictions are poor on both the training set and unknown data. Here the 1st-order model underfits: its MSE is the worst on both the training set (149.92) and the test set (659.95).
  • Overfitting: the chosen model is too complex, so it fits the training set very well but predicts unknown data poorly (poor generalization). Here the 30th-order model overfits: it achieves the best training MSE (15.05) but a much worse test MSE (85.45) than the 3rd-order model (39.71).
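
A compact way to see both effects at once is to put the train and test MSE values side by side; this small sketch reuses the pandas import from section 1 and the mse_* variables computed in sections 3.6 and 4.6:

# gather the train and test MSE values into one table
results = pd.DataFrame({
    'degree': [1, 3, 10, 30],
    'train MSE': [mse_train1, mse_train3, mse_train10, mse_train30],
    'test MSE': [mse_test1, mse_test3, mse_test10, mse_test30]})
print(results)
# training MSE keeps falling as the degree grows, while test MSE is
# lowest at degree 3: the signature of under- vs. overfitting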

Source: blog.csdn.net/weixin_42961082/article/details/113808086