Machine Learning - P2: Using the linear regression algorithm provided by sklearn

1. Data

Still the Boston housing dataset.

import numpy as np
from sklearn import datasets

boston = datasets.load_boston()

x = boston.data
y = boston.target

x = x[y<50]
y = y[y<50]
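
The later cells use x_train / x_test / y_train / y_test, which are never defined in the post; presumably they come from a train/test split along these lines (the random_state here is an arbitrary assumption):

from sklearn.model_selection import train_test_split

# Assumed split; not shown in the original post
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=666)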

2. Linear regression

from sklearn.linear_model import LinearRegression

lin_reg = LinearRegression()
lin_reg.fit(x_train,y_train)
>>>LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)

Model information

The regression coefficient for each feature:

lin_reg.coef_
>>>array([-1.05508553e-01,  3.21306705e-02, -2.22057622e-02,  6.65447557e-01,
       -1.38799680e+01,  3.33985605e+00, -2.31747290e-02, -1.26679208e+00,
        2.31563372e-01, -1.30477958e-02, -8.50823070e-01,  6.00080341e-03,
       -3.89336930e-01])

The intercept (the coefficient of the constant "1" column):

lin_reg.intercept_
>>>36.92386748074081

Score

lin_reg.score(x_test,y_test)
>>>0.8156582602978415
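
Here score returns the R² (coefficient of determination). A minimal sketch of what it computes, using the fitted model's predictions (for illustration only, not from the original post):

y_predict = lin_reg.predict(x_test)
# R^2 = 1 - residual sum of squares / total sum of squares
1 - np.sum((y_test - y_predict) ** 2) / np.sum((y_test - np.mean(y_test)) ** 2)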

3. kNN regression (KNeighborsRegressor)

Import the algorithm

from sklearn.neighbors import KNeighborsRegressor

knn_reg = KNeighborsRegressor()
knn_reg.fit(x_train,y_train)

Score
You will find that the score is quite low; this is because kNN has many hyperparameters, and we need to search for the best combination.

knn_reg.score(x_test,y_test)
>>>0.6497688813332332

Searching for better hyperparameters

Define the parameter grid

from sklearn.model_selection import GridSearchCV

param_grid = [
    {
        "weights":["uniform"],
        "n_neighbors":[i for i in range(1,11)]
    },
    {
        "weights":["uniform"],
        "n_neighbors":[i for i in range(1,11)],
        "p":[i for i in range(1,6)]
    }
]

Create an estimator

knn_reg = KNeighborsRegressor()

Run the grid search to pick the better parameters

grid_search = GridSearchCV(knn_reg,param_grid,n_jobs=-1,verbose=1)
grid_search.fit(x_train,y_train)
>>>GridSearchCV(cv='warn', error_score='raise-deprecating',
             estimator=KNeighborsRegressor(algorithm='auto', leaf_size=30,
                                           metric='minkowski',
                                           metric_params=None, n_jobs=None,
                                           n_neighbors=5, p=2,
                                           weights='uniform'),
             iid='warn', n_jobs=-1,
             param_grid=[{'n_neighbors': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                          'weights': ['uniform']},
                         {'n_neighbors': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                          'p': [1, 2, 3, 4, 5], 'weights': ['uniform']}],
             pre_dispatch='2*n_jobs', refit=True, return_train_score=False,
             scoring=None, verbose=1)

Take a look at the best parameters found

grid_search.best_params_
>>>{'n_neighbors': 4, 'p': 1, 'weights': 'uniform'}

Then score again.
The score still looks low, but that is because best_score_ is not measured the same way as the earlier linear-regression score: it is the mean score from cross-validation during the grid search, not a score on the held-out test set.

grid_search.best_score_
>>>0.5761489101577036

Scoring the best estimator on the test set, as below, uses the same evaluation method as before. So when comparing models, don't just look at the number score returns; also check that the evaluation methods match.

grid_search.best_estimator_.score(x_test,y_test)
>>>0.7343693507921156
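
Conversely, to compare the two models under the same cross-validated metric, one could also cross-validate the plain linear regression (a sketch; cv=3 matches the old GridSearchCV default shown above, but is an assumption here):

from sklearn.model_selection import cross_val_score

# Mean cross-validated R^2 of LinearRegression, comparable to grid_search.best_score_
cross_val_score(LinearRegression(), x_train, y_train, cv=3).mean()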

4. More discussion of linear models

First, fit a linear regression on the whole dataset.

lin_reg = LinearRegression()
lin_reg.fit(x,y)
>>>LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)

Interpretability

Looking at the regression coefficients, some are positive, some negative, some large, some small.
The sign tells us whether the corresponding feature is positively or negatively associated with the target, and the magnitude reflects how strongly it influences the prediction.
(This is the interpretability of linear regression.)

lin_reg.coef_
>>>array([-1.06715912e-01,  3.53133180e-02, -4.38830943e-02,  4.52209315e-01,
       -1.23981083e+01,  3.75945346e+00, -2.36790549e-02, -1.21096549e+00,
        2.51301879e-01, -1.37774382e-02, -8.38180086e-01,  7.85316354e-03,
       -3.50107918e-01])

Finding the most influential feature

Using this property, we sort the regression coefficients by index.
np.argsort() sorts the elements of the array in ascending order and returns the corresponding indices.

np.argsort(lin_reg.coef_)
>>>array([ 4,  7, 10, 12,  0,  2,  6,  9, 11,  1,  8,  3,  5], dtype=int64)

From this we can find the feature with the largest positive coefficient, i.e. the one most positively associated with the target
(here it is "RM").

boston.feature_names[np.argsort(lin_reg.coef_)]
>>>array(['NOX', 'DIS', 'PTRATIO', 'LSTAT', 'CRIM', 'INDUS', 'AGE', 'TAX',
       'B', 'ZN', 'RAD', 'CHAS', 'RM'], dtype='<U7')
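
Note that argsort on the signed coefficients mixes sign and size: the ranking above runs from the most negative to the most positive coefficient. If you only care about the magnitude of a feature's influence, sorting by absolute value is a small variation (not in the original post):

# Rank features by the magnitude of their coefficients, ignoring sign
boston.feature_names[np.argsort(np.abs(lin_reg.coef_))]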

Then look at the dataset description to see what each feature means.

print(boston.DESCR)
>>>.. _boston_dataset:

Boston house prices dataset
---------------------------

**Data Set Characteristics:**  

    :Number of Instances: 506 

    :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.

    :Attribute Information (in order):
        - CRIM     per capita crime rate by town
        - ZN       proportion of residential land zoned for lots over 25,000 sq.ft.
        - INDUS    proportion of non-retail business acres per town
        - CHAS     Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
        - NOX      nitric oxides concentration (parts per 10 million)
        - RM       average number of rooms per dwelling
        - AGE      proportion of owner-occupied units built prior to 1940
        - DIS      weighted distances to five Boston employment centres
        - RAD      index of accessibility to radial highways
        - TAX      full-value property-tax rate per $10,000
        - PTRATIO  pupil-teacher ratio by town
        - B        1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
        - LSTAT    % lower status of the population
        - MEDV     Median value of owner-occupied homes in $1000's

    :Missing Attribute Values: None

    :Creator: Harrison, D. and Rubinfeld, D.L.

This is a copy of UCI ML housing dataset.
https://archive.ics.uci.edu/ml/machine-learning-databases/housing/


This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.

The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic
prices and the demand for clean air', J. Environ. Economics & Management,
vol.5, 81-102, 1978.   Used in Belsley, Kuh & Welsch, 'Regression diagnostics
...', Wiley, 1980.   N.B. Various transformations are used in the table on
pages 244-261 of the latter.

The Boston house-price data has been used in many machine learning papers that address regression
problems.   
     
.. topic:: References

   - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.
   - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.

(There we can find RM: average number of rooms per dwelling.)

5. Linear regression trained with gradient descent

SGDRegressor can only fit linear models.
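
It is also sensitive to feature scale, so the x_train_standard / x_test_standard used below are presumably the standardized feature matrices; a sketch of that preprocessing, assuming StandardScaler:

from sklearn.preprocessing import StandardScaler

standard_scaler = StandardScaler()
standard_scaler.fit(x_train)                          # fit the scaler on the training set only
x_train_standard = standard_scaler.transform(x_train)
x_test_standard = standard_scaler.transform(x_test)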

from sklearn.linear_model import SGDRegressor 

sgd_reg = SGDRegressor()

%time sgd_reg.fit(x_train_standard,y_train)

sgd_reg.score(x_test_standard,y_test)

>>>Wall time: 260 ms
	0.7938286715532883

More on gradient descent is covered in a separate post.
