sklearn GridSearchCV: Grid Search for Parameter Tuning

Grid Search

GridSearchCV is a parameter-tuning tool. When a model does not perform well, you can use it to tune the hyperparameters: it loops over every candidate parameter combination and returns the combination that gives the best score.
Take the SVM parameters C and gamma as an example. When we do not know which values work best, we can search over them, limiting the candidates for both C and gamma to [0.001, 0.01, 0.1, 1, 10, 100].
Every value of one parameter is paired with every value of the other, so the loop walks through a grid of combinations, which is why the method is called grid search. Part of that grid looks like this (only the first column of models is written out):

              C=0.001                     C=0.01  C=0.1  C=1  C=10  C=100
gamma=0.001   SVC(gamma=0.001, C=0.001)   ...
gamma=0.01    SVC(gamma=0.01, C=0.001)    ...
...
gamma=10      SVC(gamma=10, C=0.001)      ...
gamma=100     SVC(gamma=100, C=0.001)     ...
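
As a quick illustration of what "traversing the grid" means (this snippet is only a sketch added for clarity, not part of the original tuning code), all 6 x 6 = 36 combinations can be enumerated with itertools.product:

from itertools import product

# same candidate lists as in the table above
gamma_values = [0.001, 0.01, 0.1, 1, 10, 100]
C_values = [0.001, 0.01, 0.1, 1, 10, 100]

# every (gamma, C) pair is one cell of the grid: 6 * 6 = 36 models to try
grid = list(product(gamma_values, C_values))
print(len(grid))             # 36
print(grid[0], grid[-1])     # (0.001, 0.001) (100, 100)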

Let's walk through the tuning process with some concrete code:

from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=0)
print("Training set size: %d  Test set size: %d" % (len(X_train), len(X_test)))

# start the grid search: try every (gamma, C) combination and keep the best score
best_score = 0
for gamma in [0.001, 0.01, 0.1, 1, 10, 100]:
    for C in [0.001, 0.01, 0.1, 1, 10, 100]:
        svm = SVC(gamma=gamma, C=C)
        svm.fit(X_train, y_train)
        score = svm.score(X_test, y_test)
        if score > best_score:
            best_score = score
            best_parameters = {'gamma': gamma, 'C': C}
print("best_score:{:.2f}".format(best_score))
print("best_parameters:{}".format(best_parameters))

Output:

Training set size: 112  Test set size: 38
best_score:0.97
best_parameters:{'gamma': 0.001, 'C': 100}

The problem
After splitting the original data into a training set and a test set, the test set ends up playing two roles: it is used to tune the parameters and also to evaluate the model. As a result, the reported score is optimistic compared with real performance, because we selected the parameters on the very data used for the final evaluation, while the goal is to apply the trained model to data it has never seen.

Solution:
Split the data into three parts: a training set (to fit the model), a validation set (to tune the parameters), and a test set (to evaluate the final model).

The code is as follows:

X_trainval, X_test, y_trainval, y_test = train_test_split(iris.data, iris.target)
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval)
print("Training set size: %d  Validation set size: %d  Test set size: %d" % (len(X_train), len(X_val), len(X_test)))

# search on the validation set instead of the test set
best_score = 0
for gamma in [0.001, 0.01, 0.1, 1, 10, 100]:
    for C in [0.001, 0.01, 0.1, 1, 10, 100]:
        svm = SVC(gamma=gamma, C=C)
        svm.fit(X_train, y_train)
        score = svm.score(X_val, y_val)
        if score > best_score:
            best_score = score
            best_parameters = {'gamma': gamma, 'C': C}

# refit on training + validation data with the best parameters,
# then evaluate once on the untouched test set
svm = SVC(**best_parameters)
svm.fit(X_trainval, y_trainval)
test_score = svm.score(X_test, y_test)
print("best_score:{:.2f}".format(best_score))
print("best_parameters:{}".format(best_parameters))
print("test_score:{:.2f}".format(test_score))

Output:

Training set size: 84  Validation set size: 28  Test set size: 38
best_score:1.00
best_parameters:{'gamma': 0.001, 'C': 100}
test_score:0.95

Further improvement:
To keep the chosen parameters from overfitting a single train/validation split, we use cross-validation.
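
Before the grid-search loop, here is a rough sketch of what 5-fold cross-validation does; the helper below (its name manual_cv_score is mine, and it is a simplified reading of what cross_val_score(svm, X, y, cv=5) computes for a classifier) splits the data into 5 stratified folds, lets each fold take one turn as the validation set, and averages the 5 scores:

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def manual_cv_score(gamma, C, X, y, n_splits=5):
    """Average accuracy over n_splits folds for one (gamma, C) combination."""
    skf = StratifiedKFold(n_splits=n_splits)
    fold_scores = []
    for train_idx, val_idx in skf.split(X, y):
        model = SVC(gamma=gamma, C=C)
        model.fit(X[train_idx], y[train_idx])                     # train on 4 folds
        fold_scores.append(model.score(X[val_idx], y[val_idx]))   # score on the held-out fold
    return np.mean(fold_scores)

Because each parameter combination is now judged on 5 different splits instead of one, a single lucky split is much less likely to decide the outcome.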

Grid Search with Cross Validation(GridSearchCV)

from sklearn.model_selection import cross_val_score

best_score = 0.0
for gamma in [0.001, 0.01, 0.1, 1, 10, 100]:
    for C in [0.001, 0.01, 0.1, 1, 10, 100]:
        svm = SVC(gamma=gamma, C=C)
        # 5-fold cross-validation on the training + validation data
        scores = cross_val_score(svm, X_trainval, y_trainval, cv=5)
        score = scores.mean()
        if score > best_score:
            best_score = score
            best_parameters = {'gamma': gamma, 'C': C}

# refit with the best parameters and evaluate on the test set
svm = SVC(**best_parameters)
svm.fit(X_trainval, y_trainval)
test_score = svm.score(X_test, y_test)
print("best_score:{:.2f}".format(best_score))
print("best_parameters:{}".format(best_parameters))
print("test_score:{:.2f}".format(test_score))

Output:

best_score:0.97
best_parameters:{'gamma': 0.1, 'C': 1}
test_score:0.95

To make tuning more convenient, sklearn provides the GridSearchCV class, which wraps up the fit and score steps shown above.

from sklearn.model_selection import GridSearchCV

# candidate values for each parameter being searched (given as lists)
param_grid = {"gamma": [0.001, 0.01, 0.1, 1, 10, 100],
              "C": [0.001, 0.01, 0.1, 1, 10, 100]}

# the estimator (any parameters that are not being searched are fixed here)
estimator = SVC()
grid_search = GridSearchCV(estimator, param_grid, cv=5)

X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=10)
grid_search.fit(X_train, y_train)
print("Best set score:{:.2f}".format(grid_search.best_score_))
print("Best parameters:{}".format(grid_search.best_params_))
print("Test set score:{:.2f}".format(grid_search.score(X_test, y_test)))

Output:

Best set score:0.98
Best parameters:{'gamma': 0.1, 'C': 10}
Test set score:0.97
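
Beyond best_score_ and best_params_, the fitted GridSearchCV object keeps the full search results. The short usage sketch below relies on standard sklearn attributes (cv_results_, best_estimator_); the particular columns printed are just one example of how to inspect them:

import pandas as pd

# cv_results_ has one row per parameter combination, including per-fold and mean scores
results = pd.DataFrame(grid_search.cv_results_)
print(results[["param_gamma", "param_C", "mean_test_score", "rank_test_score"]].head())

# best_estimator_ is an SVC already refitted on the whole training set
# with the best parameters, so it can be used for prediction right away
print(grid_search.best_estimator_.predict(X_test[:5]))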

Summary

GridSearchCV finds the best parameters within the given ranges, but the more parameters and candidate values there are in param_grid, the more combinations must be fitted and the longer the search takes, so GridSearchCV is best suited to small datasets.
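
One way to soften the cost of a large grid (not discussed in the original post, but part of the standard GridSearchCV signature) is to fit the candidate models in parallel via the n_jobs parameter:

# n_jobs=-1 uses all available CPU cores; verbose=1 prints progress during the search
grid_search = GridSearchCV(SVC(), param_grid, cv=5, n_jobs=-1, verbose=1)
grid_search.fit(X_train, y_train)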

Reposted from blog.csdn.net/weixin_43172660/article/details/83032029