Abstract

We evaluate the effectiveness of cross-validation in selecting the right-sized model for decision tree and k-nearest neighbor learning methods. For samples with at least 200 cases, extensive empirical evidence supports the following conclusions relative to complexity-fit selection: (a) 10-fold cross-validation is nearly unbiased; (b) ignoring model complexity-fit and picking the “standard” model is highly biased; (c) 10-fold cross-validation is consistent with optimal complexity-fit selection for large sample sizes; and (d) the accuracy of complexity-fit selection by 10-fold cross-validation depends largely on sample size, irrespective of the population distribution.
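
To make the selection procedure concrete, the following is a minimal sketch of 10-fold cross-validation used to choose a complexity parameter for a decision tree (number of leaves) and for k-nearest neighbors (number of neighbors). It assumes scikit-learn; the synthetic data, parameter grids, and estimator settings are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch only: 10-fold cross-validation to pick a complexity-fit parameter.
# Data, grids, and estimator choices are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic sample of 200 cases (the abstract's minimum sample size).
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def select_by_cv(make_model, grid):
    """Return the grid value whose model attains the best mean 10-fold CV accuracy."""
    scores = {v: cross_val_score(make_model(v), X, y, cv=10).mean() for v in grid}
    return max(scores, key=scores.get), scores

# Tree-size selection: vary the maximum number of leaves.
best_leaves, tree_scores = select_by_cv(
    lambda n: DecisionTreeClassifier(max_leaf_nodes=n, random_state=0),
    grid=[2, 4, 8, 16, 32, 64],
)

# Neighborhood-size selection: vary k.
best_k, knn_scores = select_by_cv(
    lambda k: KNeighborsClassifier(n_neighbors=k),
    grid=[1, 3, 5, 9, 15, 25],
)

print("selected max_leaf_nodes:", best_leaves)
print("selected n_neighbors:", best_k)
```

In this sketch the complexity parameter with the highest mean cross-validated accuracy is retained, which mirrors the complexity-fit selection the abstract evaluates.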
