Abstract

Classification and feature selection are two of the most intertwined problems in machine learning. Decision trees (DTs) are straightforward models that address both problems while also offering the advantage of explainability. However, solutions based on them are either tailored to the specific problem they solve, or their performance depends on the split criterion used. A game-theoretic decision forest model is proposed to approach both issues. DTs in the forest use a splitting mechanism based on the Nash equilibrium concept. A feature importance measure is computed after each tree is built, and features for subsequent trees are selected based on the information this measure provides. To make predictions, training data is aggregated from all leaves that contain the test sample, and logistic regression is then applied to this pooled data. Numerical experiments illustrate the efficiency of the approach. A real-data example that studies country income groups and world development indicators using the proposed approach is also presented.
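To make the prediction step concrete, below is a minimal Python sketch of the procedure as described in the abstract: pool the training samples from every leaf (one per tree) that a test point falls into, then fit a logistic regression on the pooled data. The Nash-equilibrium splitting itself is not shown, since the abstract gives no detail of the underlying game. The `forest` structure and the `leaf_data` method are hypothetical interfaces assumed for illustration, not names taken from the paper.

```python
# Minimal sketch, assuming each fitted tree exposes leaf_data(x),
# which returns the (X, y) training pairs stored in the leaf of that
# tree containing the sample x. All names here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def predict_one(forest, x):
    """Predict the class of a single sample x.

    Training data is aggregated from all leaves that contain x,
    and logistic regression is fitted on this pooled data.
    """
    X_parts, y_parts = [], []
    for tree in forest:
        X_leaf, y_leaf = tree.leaf_data(x)  # hypothetical interface
        X_parts.append(X_leaf)
        y_parts.append(y_leaf)
    X_pool = np.vstack(X_parts)
    y_pool = np.concatenate(y_parts)

    classes = np.unique(y_pool)
    if classes.size == 1:
        # All pooled training samples share one label; no model needed.
        return classes[0]

    clf = LogisticRegression(max_iter=1000).fit(X_pool, y_pool)
    return clf.predict(x.reshape(1, -1))[0]
```

Fitting a small local model per query point makes prediction more expensive than plain majority voting, but it lets the final decision boundary adapt to the neighborhood of the test sample defined jointly by all trees.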
