Simplest example of automatic selection of SVM parameters in SKLearn (using GridSearchCV)

It is well known that an SVM can classify very well when its parameters are tuned properly, but an SVM has quite a few parameters, such as the ones introduced here:

https://blog.csdn.net/xiaodongxiexie/article/details/70667101

Others have also explained the parameter-tuning process in more detail:

https://blog.csdn.net/baidu_15113429/article/details/72673466

The general advice is that kernel, C, and gamma are the main parameters to tune for an SVM. In scikit-learn, we can use GridSearchCV to search over the parameters automatically. Its usage has been described in detail here:

https://blog.csdn.net/cherdw/article/details/54970366

The documentation on its official website is here: http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html

However, the blogs and tutorials above are a bit involved, so here is the simplest possible example:

from sklearn import svm
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV

# Example binary-classification data so the snippet runs end to end; replace with your own X and y.
X, y = load_breast_cancer(return_X_y=True)

svr = svm.SVC()
parameters = {'kernel': ('linear', 'rbf'), 'C': [1, 2, 4], 'gamma': [0.125, 0.25, 0.5, 1, 2, 4]}
clf = GridSearchCV(svr, parameters, scoring='f1')
clf.fit(X, y)
print('The parameters of the best model are:')
print(clf.best_params_)
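
Once fit has finished, the GridSearchCV object itself can be used as the final model. A minimal follow-up sketch, continuing directly from the snippet above (so clf, X, and y are assumed to exist):

# GridSearchCV refits the best parameter combination on the full data by default (refit=True)
print('Best cross-validation score:', clf.best_score_)   # mean CV score of the best parameters
best_model = clf.best_estimator_                          # the refit svm.SVC with the best parameters
predictions = clf.predict(X)                              # predict() delegates to the best estimator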

The scoring parameter is described in more detail here:

http://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter

I use f1 here; the other possible values are listed in the table below, and a small sketch with a different scoring value follows the table:

Scoring                          Function                               Comment
Classification
'accuracy'                       metrics.accuracy_score
'average_precision'              metrics.average_precision_score
'f1'                             metrics.f1_score                       for binary targets
'f1_micro'                       metrics.f1_score                       micro-averaged
'f1_macro'                       metrics.f1_score                       macro-averaged
'f1_weighted'                    metrics.f1_score                       weighted average
'f1_samples'                     metrics.f1_score                       by multilabel sample
'neg_log_loss'                   metrics.log_loss                       requires predict_proba support
'precision' etc.                 metrics.precision_score                suffixes apply as with 'f1'
'recall' etc.                    metrics.recall_score                   suffixes apply as with 'f1'
'roc_auc'                        metrics.roc_auc_score
Clustering
'adjusted_mutual_info_score'     metrics.adjusted_mutual_info_score
'adjusted_rand_score'            metrics.adjusted_rand_score
'completeness_score'             metrics.completeness_score
'fowlkes_mallows_score'          metrics.fowlkes_mallows_score
'homogeneity_score'              metrics.homogeneity_score
'mutual_info_score'              metrics.mutual_info_score
'normalized_mutual_info_score'   metrics.normalized_mutual_info_score
'v_measure_score'                metrics.v_measure_score
Regression
'explained_variance'             metrics.explained_variance_score
'neg_mean_absolute_error'        metrics.mean_absolute_error
'neg_mean_squared_error'         metrics.mean_squared_error
'neg_mean_squared_log_error'     metrics.mean_squared_log_error
'neg_median_absolute_error'      metrics.median_absolute_error
'r2'                             metrics.r2_score
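
Note that plain 'f1' only works for binary targets; for a multi-class problem one of the averaged variants such as 'f1_macro' has to be used instead. A minimal sketch, reusing the svr and parameters (and X, y) defined above:

clf_macro = GridSearchCV(svr, parameters, scoring='f1_macro')  # macro-averaged F1 also handles multi-class targets
clf_macro.fit(X, y)
print(clf_macro.best_params_)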
