Abstract

It is popular to evaluate the performance of classification algorithms by k-fold cross validation. A reliable accuracy estimate should have a relatively small variance, and several studies have therefore suggested performing k-fold cross validation repeatedly. Most of them did not consider the correlation among the replications of k-fold cross validation, and hence the variance could be underestimated. The purpose of this study is to explore whether k-fold cross validation should be performed repeatedly to obtain reliable accuracy estimates. The dependency relationships between the predictions of the same instance in two replications of k-fold cross validation are first analyzed for k-nearest neighbors with $k = 1$. Then, statistical methods are proposed to test the strength of the dependency between the accuracy estimates resulting from two replications of k-fold cross validation. The experimental results on 20 data sets show that the accuracy estimates obtained from various replications of k-fold cross validation are generally highly correlated, and the correlation grows as the number of folds increases. Therefore, k-fold cross validation with a large number of folds and a small number of replications should be adopted for performance evaluation of classification algorithms.
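A minimal sketch of the setup the abstract studies, repeated k-fold cross validation for a 1-nearest-neighbor classifier, may make the procedure concrete. Synthetic two-class data stands in for the paper's 20 data sets, and the seed, fold count, and replication count are illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np

def one_nn_predict(X_train, y_train, X_test):
    # 1-nearest neighbor: predict the label of the closest training point
    # (Euclidean distance), i.e. k-NN with k = 1 as analyzed in the paper
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[np.argmin(d, axis=1)]

def kfold_accuracy(X, y, k, rng):
    # one replication of k-fold CV: shuffle, split into k folds,
    # and average the per-fold test accuracies
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        pred = one_nn_predict(X[train], y[train], X[test])
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))

# hypothetical synthetic two-class data (stand-in for real data sets)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# replications of 10-fold CV differ only in the random shuffle; the paper's
# point is that the resulting accuracy estimates are far from independent
estimates = [kfold_accuracy(X, y, k=10, rng=rng) for _ in range(5)]
print(estimates)
```

Because each replication reuses the same instances and differs only in how they are partitioned into folds, treating the replication estimates as independent samples understates the variance of the accuracy estimate, which is the paper's central concern.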
