Dive into Deep Learning 3-5: Others

Since the validation set does not participate in training, setting aside a large amount of validation data is unaffordable when training data is scarce. An improved method is $K$-fold cross-validation. In $K$-fold cross-validation, we split the original training dataset into $K$ non-overlapping sub-datasets and then perform $K$ rounds of model training and validation. In each round, we use one sub-dataset to validate the model and the remaining $K-1$ sub-datasets to train it. Over these $K$ rounds, a different sub-dataset is used for validation each time. Finally, we average the $K$ training errors and the $K$ validation errors.
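As a rough sketch of the procedure (the NumPy-based splitting and the `train_and_evaluate` callable below are illustrative assumptions, not code from the original post):

```python
import numpy as np

def get_k_fold_data(k, i, X, y):
    """Return the training and validation data for fold i of k.

    Fold i is held out for validation; the other k-1 folds are
    concatenated into the training set. Rows beyond k * fold_size
    are dropped for simplicity.
    """
    assert k > 1
    fold_size = X.shape[0] // k
    X_train, y_train = None, None
    for j in range(k):
        idx = slice(j * fold_size, (j + 1) * fold_size)
        X_part, y_part = X[idx], y[idx]
        if j == i:
            X_valid, y_valid = X_part, y_part
        elif X_train is None:
            X_train, y_train = X_part, y_part
        else:
            X_train = np.concatenate([X_train, X_part])
            y_train = np.concatenate([y_train, y_part])
    return X_train, y_train, X_valid, y_valid

def k_fold(k, X, y, train_and_evaluate):
    """Run k-fold cross-validation and average the errors.

    `train_and_evaluate` is a hypothetical callable that trains a
    fresh model on one fold's data and returns
    (train_error, valid_error).
    """
    train_err_sum, valid_err_sum = 0.0, 0.0
    for i in range(k):
        fold_data = get_k_fold_data(k, i, X, y)
        train_err, valid_err = train_and_evaluate(*fold_data)
        train_err_sum += train_err
        valid_err_sum += valid_err
    # Average the K training errors and K validation errors.
    return train_err_sum / k, valid_err_sum / k
```

The averaged validation error from `k_fold` is typically used to compare model or hyperparameter choices; the final model is then retrained on the full training set.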


Source: www.cnblogs.com/ghdg/p/12430712.html