I previously wrote an article on implementing K-fold cross-validation with YOLOv8. Many readers wanted to try it with YOLOv5/v7 as well, so here is the tutorial.
K-fold cross-validation

Concept
K-fold cross-validation is a technique commonly used to evaluate the performance of machine learning models. It makes fuller use of a limited dataset, reduces the bias caused by an unlucky train/test split, and gives a more reliable estimate of the model's generalization ability.
The basic idea of K-fold cross-validation is to divide the original data into K subsets, called "folds". K-1 of these subsets are used to train the model, and the remaining one is used to test it. This process is repeated K times, each time selecting a different subset as the test set, finally yielding K evaluations of model performance, which are usually averaged to obtain the final metric.
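The splitting-and-averaging procedure described above can be sketched with scikit-learn's `KFold`. This is a minimal illustration of the split mechanics only; the per-fold score here is a placeholder, and in a real YOLOv5/v7 workflow you would train on the K-1 training folds and evaluate (e.g. mAP) on the held-out fold instead.

```python
from sklearn.model_selection import KFold
import numpy as np

# Toy dataset: 10 sample indices, just to illustrate how the folds are formed.
data = np.arange(10)

# K = 5 folds; shuffle before splitting so folds are not consecutive runs.
kf = KFold(n_splits=5, shuffle=True, random_state=0)

fold_scores = []
for fold, (train_idx, test_idx) in enumerate(kf.split(data)):
    # K-1 folds form the training set, the remaining fold is the test set.
    train_set, test_set = data[train_idx], data[test_idx]
    # Placeholder "score": in practice, train a model on train_set
    # and evaluate it on test_set here.
    score = len(test_set) / len(data)
    fold_scores.append(score)

# The K per-fold results are averaged into the final evaluation metric.
final_metric = sum(fold_scores) / len(fold_scores)
print(final_metric)
```

Each sample lands in the test set exactly once across the K rounds, which is what makes the averaged metric a fuller use of the data than a single fixed split.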
Steps
- Dataset partitioning: randomly divide the original dataset into K subsets, ensuring that each subset has a similar number of samples. Typically K is 5 or 10, but other values can be chosen in some cases.
- Training and testing: