Abstract

Model validation assesses how the results of a statistical analysis will generalize to an independent data set. Several techniques exist for model validation; one of the most familiar is cross-validation, also known as out-of-sample testing or rotation estimation, which is the subject of this chapter. The technique is mainly applied in settings where the goal is prediction, to estimate how accurately a predictive model will perform in practice. Cross-validation methods in machine learning fall into two main families: exhaustive and non-exhaustive. Exhaustive methods learn and test on every possible way of dividing the original sample into a training set and a validation set; they include leave-p-out and leave-one-out cross-validation. Non-exhaustive methods include hold-out and k-fold cross-validation. Because the field of cross-validation is very broad, we focus on the most familiar variant, k-fold cross-validation. Finally, we work through two extended problems on k-fold cross-validation using deep learning techniques.
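To make the k-fold idea concrete, the following is a minimal sketch (not the chapter's own code) of splitting a data set into k folds, where each fold serves once as the validation set while the remaining folds form the training set. The mean-predictor "model" and the data values are hypothetical, chosen only to keep the example self-contained:

```python
def k_fold_indices(n_samples, k):
    """Yield (train_indices, val_indices) pairs for k-fold cross-validation."""
    # Distribute samples as evenly as possible across the k folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

# Hypothetical data; the "model" simply predicts the training mean.
data = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
scores = []
for train_idx, val_idx in k_fold_indices(len(data), k=3):
    train = [data[i] for i in train_idx]
    prediction = sum(train) / len(train)  # "fit" on the training folds
    # Mean squared error on the held-out validation fold.
    mse = sum((data[i] - prediction) ** 2 for i in val_idx) / len(val_idx)
    scores.append(mse)

# The cross-validation estimate is the average score over the k folds.
cv_score = sum(scores) / len(scores)
```

In practice one would shuffle the indices before splitting and use a real learning algorithm in place of the mean predictor; the structure of the loop, however, is the essence of k-fold cross-validation.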
