Machine Learning - Cross Validation: Python Dataset Partitioning

There are two common approaches to model selection: regularization (the typical method) and cross-validation.

This article introduces cross-validation and its Python implementation.

Cross-validation

If the available sample data is plentiful, a simple way to do model selection is to randomly split the dataset into three parts: a training set, a validation set, and a test set.

Training set: used to train the model

Validation set: used for model selection

Test set: used for the final evaluation of the model

Among the learned models of varying complexity, the one with the smallest prediction error on the validation set is selected. As long as the validation set contains enough data, using it for model selection is effective. In many real-world applications, however, data is insufficient, and cross-validation methods can be used instead.
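To illustrate this selection procedure, here is a minimal sketch of choosing among models of varying complexity by validation error. The synthetic data and the choice of polynomial models are invented for this example, not taken from the article:

```python
import numpy as np

# Synthetic data: noisy samples of a sine curve (illustrative only)
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# Split: 40 training samples, 20 validation samples
idx = rng.permutation(x.size)
tr, va = idx[:40], idx[40:]

# Models of varying complexity: polynomials of degree 1..9
best_deg, best_err = None, np.inf
for deg in range(1, 10):
    coef = np.polyfit(x[tr], y[tr], deg)
    err = np.mean((np.polyval(coef, x[va]) - y[va]) ** 2)
    if err < best_err:
        best_deg, best_err = deg, err

# best_deg is the complexity with the smallest validation error
print(best_deg, best_err)
```

The test set plays no role in this loop; it is held back so the finally chosen model can be evaluated on data it has never influenced.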

Basic idea: reuse the data. Split the given data into a training set and a test set, and on this basis repeatedly train, test, and select models.

Simple cross-validation:

The data is randomly split into two parts, a training set and a test set. Typically 70% of the data goes to the training set and 30% to the test set.

Code (divide training set, test set):

# Note: train_test_split moved to sklearn.model_selection in scikit-learn 0.18;
# the old sklearn.cross_validation module has since been removed.
from sklearn.model_selection import train_test_split

# data: all features; labels: all target values
X_train, X_test, Y_train, Y_test = train_test_split(data, labels, test_size=0.25, random_state=0)  # 75% training set, 25% test set

where random_state is documented in the scikit-learn source as:

    int, RandomState instance or None, optional (default=None)
    If int, random_state is the seed used by the random number generator;
    If RandomState instance, random_state is the random number generator;
    If None, the random number generator is the RandomState instance used
    by `np.random`.

In short: if you set a fixed value, such as random_state=10, the split is identical every time, including across multiple runs. If you set random_state=None, the split is different on every call and on every run.
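A quick way to see this behavior is to call train_test_split twice with the same seed and compare the results. The toy arrays X and y below are invented for the demonstration:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data: 10 samples with 2 features each (illustrative only)
X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# Same fixed seed -> identical splits on every call
a_train, a_test, _, _ = train_test_split(X, y, test_size=0.3, random_state=10)
b_train, b_test, _, _ = train_test_split(X, y, test_size=0.3, random_state=10)
print(np.array_equal(a_train, b_train))  # True

# With random_state=None the two calls would generally produce different splits
```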

Code (divide into training, validation, and test sets):

from sklearn.model_selection import train_test_split

# First split off the test set (30%); the remainder is training + validation
train_and_valid, test = train_test_split(data, test_size=0.3, random_state=0)
# Then split the remainder into training (50%) and validation (50%) sets
train, valid = train_test_split(train_and_valid, test_size=0.5, random_state=0)
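The splits above use each sample only once. The repeated train/test idea described earlier is usually implemented as k-fold cross-validation, where every sample serves as test data in exactly one fold. A minimal sketch using scikit-learn's KFold; the toy array X is invented for illustration:

```python
import numpy as np
from sklearn.model_selection import KFold

# Toy data: 20 samples with 2 features each (illustrative only)
X = np.arange(40).reshape(20, 2)

# 5 folds: each fold holds out 4 samples for testing, trains on the other 16
kf = KFold(n_splits=5, shuffle=True, random_state=0)
sizes = []
for train_idx, test_idx in kf.split(X):
    sizes.append((len(train_idx), len(test_idx)))

print(sizes)  # five folds of (16, 4)
```

Averaging a model's test error over the five folds gives a more stable estimate than a single 70/30 split, which is the point of cross-validation when data is scarce.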


      

