[Practical] In machine learning, what should you do when validation set results are better than test set results?

The model performs better on the validation set (development set) than on the test set; in other words, its test set performance is worse than its validation set performance. What should you do in this situation?

This can be understood as the model overfitting the validation set: the validation set performance no longer reflects the model's true generalization ability.
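A quick way to convince yourself of this effect is to simulate it. The toy sketch below (made-up sizes, and "models" that are literally coin flips with zero real skill) picks the best of 50 candidates on a validation set; the winner's validation accuracy looks clearly above chance, but its test accuracy falls back to ~50%:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (made-up sizes): 1,000 validation and 1,000 test labels, and
# 50 candidate "models" that are all pure coin flips, i.e. zero real skill.
n_val, n_test, n_models = 1_000, 1_000, 50
y_val = rng.integers(0, 2, n_val)
y_test = rng.integers(0, 2, n_test)

val_acc, test_acc = [], []
for _ in range(n_models):
    val_acc.append((rng.integers(0, 2, n_val) == y_val).mean())
    test_acc.append((rng.integers(0, 2, n_test) == y_test).mean())

best = int(np.argmax(val_acc))  # "model selection" on the validation set
print(f"chosen model, val acc:  {val_acc[best]:.3f}")   # biased upward, ~0.53-0.54
print(f"chosen model, test acc: {test_acc[best]:.3f}")  # back near chance, ~0.50
```

The more candidates you score against the same validation set, the larger this optimistic bias gets, which is exactly the "overfitting the validation set" described above.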

In this situation, you can:
1) Check whether the validation set and test set come from the same distribution; the validation set should resemble the test set, not the training set (see the sketch after this list).
2) Change the validation set, or make it larger.
3) Check the code for bugs, e.g., whether the validation set was accidentally used to train the model's parameters.
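For check 1, one standard technique (not from the original post, but commonly used for this purpose) is adversarial validation: label validation rows 0 and test rows 1, then train a classifier to tell them apart. A cross-validated AUC near 0.5 means the two sets are statistically hard to distinguish; an AUC well above 0.5 suggests a distribution mismatch. A minimal sketch, assuming scikit-learn is available and using synthetic placeholder features for X_val and X_test:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder features; in practice, use your real validation/test matrices.
rng = np.random.default_rng(0)
X_val = rng.normal(0.0, 1.0, size=(500, 10))
X_test = rng.normal(0.3, 1.0, size=(500, 10))  # deliberately shifted

# Label the origin of each row and try to predict it.
X = np.vstack([X_val, X_test])
y = np.concatenate([np.zeros(len(X_val)), np.ones(len(X_test))])

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc").mean()
# ~0.5 would mean same distribution; here the deliberate shift pushes it
# to roughly 0.75, flagging a mismatch.
print(f"val-vs-test AUC: {auc:.3f}")
```

For check 3, a similarly simple test is to hash each row of the training and validation sets and look for a non-empty intersection, which would indicate leakage.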

In the following cases, the gap may be normal:
1) The test set is intrinsically harder to predict than the validation set; the algorithm may already be doing well enough, with little room left for improvement.
2) The difference between validation and test performance is small, e.g., around 1%; a gap of that size can be ordinary statistical noise (see the sketch after this list for a quick check).
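For case 2, you can sanity-check whether the gap is within sampling noise by comparing it to the binomial standard error of each accuracy estimate. A back-of-the-envelope sketch with hypothetical accuracies and set sizes:

```python
import math

def acc_std_error(acc: float, n: int) -> float:
    """Standard error of an accuracy measured on n examples (binomial)."""
    return math.sqrt(acc * (1.0 - acc) / n)

# Hypothetical numbers: 92% validation accuracy vs 91% test accuracy,
# each measured on 2,000 examples.
val_acc, test_acc, n = 0.92, 0.91, 2_000
gap = val_acc - test_acc
# Standard error of the difference between two independent estimates.
se_gap = math.sqrt(acc_std_error(val_acc, n) ** 2 + acc_std_error(test_acc, n) ** 2)
print(f"gap = {gap:.3f}, ~2 std errors = {2 * se_gap:.3f}")
# Here 0.010 < 0.018, so this 1% gap is comfortably within noise.
```

If the gap exceeds roughly two standard errors, it is worth going back to the checklist above rather than writing it off as noise.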

References

《Machine Learning Yearning》(机器学习训练秘籍) -- Andrew Ng
Validation and Testing accuracy widely different -- Stack Overflow
Test accuracy is so much lower than validation accuracy by 6~10%. What could be the reason? -- Stack Exchange
