Abstract
Random forest (RF) is an ensemble learning method that is widely regarded as a reference classifier because of its excellent performance. Several improvements of RF have been published. One kind of improvement is based on the use of multivariate decision trees built with a local optimization process (oblique RF). Another kind provides additional diversity to the univariate decision trees through the use of imprecise probabilities (random credal random forest, RCRF). The aim of this work is to compare these improvements of the RF algorithm experimentally. It is shown that the improvement of RF based on additional diversity and imprecise probabilities achieves better results than RF with multivariate decision trees.