1. Introduction
When I first saw grid search used for hyperparameter tuning in papers, I assumed it was something sophisticated; it turns out to be a brute-force search. Suppose a model has three hyperparameters A, B, and C, with 3 candidate values for A, 4 for B, and 5 for C. To select the best model, you try every combination of values, i.e. all 60 (3 × 4 × 5) candidate models, and pick the one that best meets your needs.
Note: grid search is a tuning strategy; it says nothing about how the search is implemented.
You can implement it with a plain for loop, or use sklearn's sklearn.model_selection.GridSearchCV class directly.
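For example, a hand-rolled grid search is nothing more than a loop over every combination of candidate values. Below is a minimal sketch (the dataset, estimator, and parameter values are illustrative assumptions, not from the original post):

from itertools import product

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate values for each hyperparameter (3 * 2 = 6 combinations here).
param_grid = {'C': [0.1, 1, 10], 'gamma': [0.01, 0.001]}

best_score, best_params = -1.0, None
# Try every point on the "grid" of combinations.
for C, gamma in product(param_grid['C'], param_grid['gamma']):
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
    if score > best_score:
        best_score, best_params = score, {'C': C, 'gamma': gamma}

print(best_params, best_score)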
I found this blog post helpful: Essential for Parameter Tuning – Grid Search
The shortcoming of grid search is just as obvious: it is time-consuming. In practice, therefore, it pays to tune fewer parameters at a time and to narrow the search range of each parameter.
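One common way to narrow the range (a sketch of my own, assuming an SVC; the values are arbitrary, not from the original post) is a two-stage search: first a coarse grid over several orders of magnitude, then a finer grid around the best coarse value:

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Stage 1: coarse search across orders of magnitude.
coarse = GridSearchCV(SVC(), {'C': [0.01, 0.1, 1, 10, 100]}, cv=5)
coarse.fit(X, y)
best_C = coarse.best_params_['C']

# Stage 2: finer search in a narrow band around the coarse optimum.
fine = GridSearchCV(SVC(), {'C': [best_C / 2, best_C, best_C * 2]}, cv=5)
fine.fit(X, y)
print(fine.best_params_)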
2. Code implementation
The example code provided on sklearn's official website is quite good: sklearn official website parameter tuning example
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.svm import SVC
# Loading the Digits dataset
digits = datasets.load_digits()
# To apply a classifier on this data, we need to flatten the images, to
# turn the data into a (samples, features) matrix:
n_samples = len(digits.images)
X = digits.images.reshape((n_samples, -1))
y = digits.target
# Split the dataset in two equal parts
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)
# Set the parameters by cross-validation
tuned_parameters = [
    {'kernel': ['rbf'], 'gamma': [1e-3, 1e-4], 'C': [1, 10, 100, 1000]},
    {'kernel': ['linear'], 'C': [1, 10, 100, 1000]},
]
scores = ['precision', 'recall']
for score in scores:
    print("# Tuning hyper-parameters for %s" % score)
    print()

    # GridSearchCV exhaustively tries every parameter combination,
    # scoring each with cross-validation on the training set.
    clf = GridSearchCV(
        SVC(), tuned_parameters, scoring='%s_macro' % score
    )
    clf.fit(X_train, y_train)

    print("Best parameters set found on development set:")
    print()
    print(clf.best_params_)
    print()
    print("Grid scores on development set:")
    print()
    means = clf.cv_results_['mean_test_score']
    stds = clf.cv_results_['std_test_score']
    for mean, std, params in zip(means, stds, clf.cv_results_['params']):
        print("%0.3f (+/-%0.03f) for %r"
              % (mean, std * 2, params))
    print()

    print("Detailed classification report:")
    print()
    print("The model is trained on the full development set.")
    print("The scores are computed on the full evaluation set.")
    print()
    y_true, y_pred = y_test, clf.predict(X_test)
    print(classification_report(y_true, y_pred))
    print()
# Note the problem is too easy: the hyperparameter plateau is too flat and the
# output model is the same for precision and recall with ties in quality.
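After fitting, GridSearchCV keeps the winning configuration on the object itself; with refit=True (the default) it also retrains the best model on the whole training set, which is why clf.predict can be called directly above. For instance, continuing from the script:

print(clf.best_score_)      # best mean cross-validated score
print(clf.best_params_)     # the winning parameter combination
print(clf.best_estimator_)  # the SVC refit with those parameters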
3. Others
The official documentation is in English; if you find that a hurdle, these blog posts cover the same ground: [GridSearchCV hyperparameter tuning usage] [K-nearest-neighbors classifier KNeighborsClassifier usage] [Cross-validation]