1. Linear regression API
(1) Optimized by the normal equation
sklearn.linear_model.LinearRegression(fit_intercept=True)
- Optimized by the normal equation
- Parameters
- fit_intercept: whether to calculate the bias (intercept)
- Attributes
- LinearRegression.coef_: regression coefficients
- LinearRegression.intercept_: bias
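A minimal sketch of the `coef_` and `intercept_` attributes above, using a hypothetical toy dataset where the true relationship is y = 2x + 1:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data generated from y = 2x + 1 (no noise)
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])

# Fit via the normal equation (closed-form least squares)
model = LinearRegression(fit_intercept=True)
model.fit(X, y)

print(model.coef_)       # regression coefficient, ~[2.]
print(model.intercept_)  # bias, ~1.0
```

Because the data is noise-free, the recovered coefficient and bias match the generating line exactly (up to floating-point precision).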
(2) Optimized by gradient descent method
sklearn.linear_model.SGDRegressor(loss="squared_loss", fit_intercept=True, learning_rate='invscaling', eta0=0.01)
- The SGDRegressor class implements stochastic gradient descent learning. It supports different loss functions and regularization penalty terms to fit linear regression models.
- Parameters:
- loss: loss type
- loss="squared_loss": ordinary least squares
- fit_intercept: whether to calculate the bias (intercept)
- learning_rate: string, optional
- Learning rate schedule
- 'constant': eta = eta0
- 'optimal': eta = 1.0 / (alpha * (t + t0))
- 'invscaling': eta = eta0 / pow(t, power_t) [default for SGDRegressor]
- power_t=0.25: defined in the parent class
- For a constant learning rate, use learning_rate='constant' and set eta0 to the desired rate.
- Attributes:
- SGDRegressor.coef_: regression coefficients
- SGDRegressor.intercept_: bias
2. Boston housing price prediction case
(1) Data content
(2) Analysis
The feature scales in the regression data are inconsistent, which can strongly distort the results, so the data needs to be standardized.
The whole process can be summarized into the following three parts:
- Data splitting and standardization
- Regression prediction
- Linear regression algorithm effect evaluation
(3) Regression performance evaluation
Mean Squared Error (MSE) evaluation mechanism:
API: sklearn.metrics.mean_squared_error(y_true, y_pred)
- Mean square error regression loss
- y_true: true value
- y_pred: predicted value
- return: floating point result
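A small sketch of the MSE API above, with hypothetical values, checking it against the definition mean((y_true - y_pred)^2):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([3.0, -0.5, 2.0, 7.0])  # true values
y_pred = np.array([2.5, 0.0, 2.0, 8.0])   # predicted values

# sklearn's result vs. the definition computed by hand
mse = mean_squared_error(y_true, y_pred)
manual = np.mean((y_true - y_pred) ** 2)

print(mse)  # 0.375
```

The squared errors here are 0.25, 0.25, 0.0, and 1.0, whose mean is 0.375, matching the API's floating-point return value.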
"""
# 数据获取
# 数据基本处理
# 分割数据
# 特征工程
# 机器学习-线性回归
# 模型评估
"""
from sklearn.datasets import load_boston  # note: load_boston was removed in scikit-learn >= 1.2
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression, SGDRegressor
from sklearn.metrics import mean_squared_error
def liner_model_1():
    """
    Linear regression: normal equation
    """
    # Data acquisition
    boston = load_boston()
    # Data splitting
    x_train, x_test, y_train, y_test = train_test_split(boston.data, boston.target, test_size=0.2)
    # Feature engineering: standardization
    transfer = StandardScaler()
    x_train = transfer.fit_transform(x_train)
    x_test = transfer.transform(x_test)
    # Machine learning - linear regression
    estimator = LinearRegression()
    estimator.fit(x_train, y_train)
    print("Model bias:", estimator.intercept_)
    print("Model coefficients:", estimator.coef_)
    # Model evaluation
    y_pred = estimator.predict(x_test)
    print("Predictions:", y_pred)
    # Mean squared error
    mse = mean_squared_error(y_test, y_pred)
    print("Mean squared error:", mse)
def liner_model_2():
    """
    Linear regression: gradient descent
    """
    # Data acquisition
    boston = load_boston()
    # Data splitting
    x_train, x_test, y_train, y_test = train_test_split(boston.data, boston.target, test_size=0.2)
    # Feature engineering: standardization
    transfer = StandardScaler()
    x_train = transfer.fit_transform(x_train)
    x_test = transfer.transform(x_test)
    # Machine learning - linear regression via SGD
    estimator = SGDRegressor(max_iter=2000, learning_rate="constant", eta0=0.0001)
    estimator.fit(x_train, y_train)
    print("Model bias:", estimator.intercept_)
    print("Model coefficients:", estimator.coef_)
    # Model evaluation
    y_pred = estimator.predict(x_test)
    print("Predictions:", y_pred)
    # Mean squared error
    mse = mean_squared_error(y_test, y_pred)
    print("Mean squared error:", mse)
liner_model_1()
print("*************************************************************")
liner_model_2()