Abstract

Random forest is an excellent classification tool, especially in the -omics sciences such as metabolomics, where the number of variables is much greater than the number of subjects, i.e., "n ≪ p." However, the choice of arguments for the random forest implementation is very important. Simulation studies are performed to compare the effect of the input parameters on the predictive ability of the random forest. The number of variables sampled at each split, mtry, has the largest impact on the true prediction error. It is often claimed that the out-of-bag (OOB) error is an unbiased estimate of the true prediction error. However, in the case where n ≪ p, with the default arguments, the OOB error overestimates the true error, i.e., the random forest actually performs better than the OOB error indicates. This bias is greatly reduced by subsampling without replacement and choosing the same number of observations from each group. Even after these adjustments, however, a small amount of bias remains. The remaining bias occurs because, among trees with equal predictive ability, the one that performs better on the in-bag samples will perform worse on the out-of-bag samples. Cross-validation can be performed to reduce the remaining bias.
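
The paper's simulations are not reproduced here, but the reported effect can be illustrated with a minimal sketch. The following uses scikit-learn rather than the original implementation, with arbitrarily chosen values of n, p, the number of trees, and the number of repetitions; it fits a default random forest to pure-noise data whose labels are independent of the variables, so the true error rate is 0.5 by construction, and compares the OOB error estimate to the error on an independent sample.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p = 30, 500                      # far fewer subjects than variables (n << p)
X = rng.normal(size=(n, p))         # noise variables only
y = np.repeat([0, 1], n // 2)       # labels independent of X: true error = 0.5

oob_err, test_err = [], []
for seed in range(10):
    rf = RandomForestClassifier(n_estimators=250, oob_score=True,
                                random_state=seed)
    rf.fit(X, y)
    oob_err.append(1.0 - rf.oob_score_)        # OOB estimate of the error
    X_new = rng.normal(size=(500, p))          # independent draw from the
    y_new = rng.integers(0, 2, size=500)       # same null model
    test_err.append(1.0 - rf.score(X_new, y_new))

print(f"mean OOB error:  {np.mean(oob_err):.3f}")   # typically above 0.5
print(f"mean true error: {np.mean(test_err):.3f}")  # close to 0.5
```

With bootstrap sampling, the in-bag class proportions vary from tree to tree; each tree leans toward its in-bag majority class, and its OOB samples are then disproportionately drawn from the other class, which pushes the OOB error above the true error on null data.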

Highlights

  • Random forest [1] is an ensemble method based on aggregating predictions from a large number of decision trees (a from-scratch sketch of this idea follows this list)

  • We see that the actual prediction error is similar across most parameter combinations for each model, and that it decreases with increasing sample size for Models 2 and 3

  • These results mirror those seen for the data sets with the group labels shuffled in [5], where the predictive ability of the random forest variants is compared using the OOB error rate for each
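
To make the first highlight concrete, here is a rough from-scratch sketch of the aggregation idea: grow many trees, each on a bootstrap sample and with a random subset of variables considered at each split, then combine their predictions by majority vote. This is a simplification of Breiman's algorithm for binary 0/1 labels only; the helper name `forest_predict` is hypothetical, and numpy arrays are assumed as inputs.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def forest_predict(X_train, y_train, X_new, n_trees=100, seed=0):
    """Aggregate predictions from many trees, each grown on a bootstrap
    sample with a random subset of variables tried at each split."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    votes = np.zeros((len(X_new), n_trees), dtype=int)
    for t in range(n_trees):
        idx = rng.integers(0, n, size=n)          # bootstrap (in-bag) sample
        tree = DecisionTreeClassifier(
            max_features="sqrt",                  # mtry: variables per split
            random_state=int(rng.integers(1 << 31)),
        )
        tree.fit(X_train[idx], y_train[idx])
        votes[:, t] = tree.predict(X_new)
    # majority vote across trees (ties break toward class 1)
    return (votes.mean(axis=1) >= 0.5).astype(int)
```

Here `max_features="sqrt"` means each split considers about √p of the variables, playing the role of the mtry parameter discussed in the abstract.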


Introduction

Random forest [1] is an ensemble method based on aggregating predictions from a large number of decision trees. Breiman discusses the properties of random forest for the various input parameters in his seminal paper [1]. In that discussion, the number of samples was larger than the number of variables. These properties may differ when n ≪ p. It is often stated that the OOB error is an unbiased estimate of the true prediction error. We will show that this is not necessarily the case.
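
Since the abstract recommends cross-validation to reduce the remaining bias, a minimal sketch of a cross-validated error estimate may be useful. It assumes scikit-learn and uses synthetic stand-in data; in practice X and y would be the study's own n ≪ p matrix and group labels, and the fold count is arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n, p = 30, 500
X = rng.normal(size=(n, p))        # stand-in for an n << p -omics matrix
y = np.repeat([0, 1], n // 2)

# Stratified folds keep the group proportions equal in every fold, and each
# fold's error is computed on samples the forest never saw during fitting.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
rf = RandomForestClassifier(n_estimators=250, random_state=0)
acc = cross_val_score(rf, X, y, cv=cv)
print(f"cross-validated error: {1.0 - acc.mean():.3f}")
```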
