Abstract

Model validation assesses how the results of a statistical analysis will generalize to an independent data set. Several techniques exist for model validation; one of the most familiar, discussed in this chapter, is cross-validation, also known as out-of-sample testing or rotation estimation. It is mainly applied in settings where the goal is prediction, to estimate how accurately a predictive model will perform in practice. Cross-validation methods in machine learning fall into two main families: exhaustive and non-exhaustive. Exhaustive methods learn and test on every possible way of dividing the original sample into a training set and a validation set; they include leave-p-out and leave-one-out cross-validation. Non-exhaustive methods include hold-out and K-fold cross-validation. Because the family of cross-validation methods is too extensive to cover in full, we focus on the most familiar one, K-fold cross-validation. Finally, we solve two extended problems on K-fold cross-validation using deep learning techniques.
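As an illustration of the K-fold scheme described above, the following is a minimal sketch (not the chapter's own implementation) of how the sample indices are partitioned into K folds, with each fold serving once as the validation set while the remaining K-1 folds form the training set. The function name `k_fold_splits` is an assumption for this example.

```python
def k_fold_splits(n_samples, k):
    """Yield (train_idx, val_idx) index pairs for K-fold cross-validation.

    Each of the k folds is used exactly once as the validation set;
    the other k-1 folds together form the training set.
    """
    indices = list(range(n_samples))
    # Distribute any remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    for i in range(k):
        val_idx = folds[i]
        train_idx = [j for f, fold in enumerate(folds) if f != i
                     for j in fold]
        yield train_idx, val_idx


if __name__ == "__main__":
    # 10 samples, 5 folds: each validation fold holds 2 samples.
    for train_idx, val_idx in k_fold_splits(10, 5):
        print(train_idx, val_idx)
```

In practice one would fit the model on `train_idx`, score it on `val_idx`, and average the K validation scores to estimate generalization performance.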
