Abstract

Decision trees are widely regarded as an effective solution for classification tasks. However, decision tree induction algorithms have limitations due to the strategy they typically implement: recursive top-down partitioning with greedy split evaluation. This strategy is limiting because quality degrades as the partitioning process advances, producing statistically insignificant rules. To avoid the greedy strategy and convergence to local optima, we present a novel Genetic Algorithm for decision tree induction based on a lexicographic multi-objective approach, and we compare it with J48, the most well-known decision tree induction algorithm, over distinct public datasets. The results show the feasibility of this technique for avoiding the problems described above: it achieves comparable accuracy while, importantly, producing a significantly simpler classification model on the employed datasets.

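The lexicographic multi-objective approach can be read as comparing candidate trees objective by objective in a fixed priority order, falling back to a lower-priority objective only when the higher-priority one is effectively tied. The sketch below illustrates one such comparison (accuracy before tree size, with a tolerance threshold) and a tournament selection built on it; the objective order, the tolerance value, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import random
from dataclasses import dataclass


@dataclass
class Fitness:
    accuracy: float   # higher is better (assumed primary objective)
    tree_size: int    # lower is better (assumed secondary objective)


def lexicographic_better(a: Fitness, b: Fitness, acc_tol: float = 0.01) -> bool:
    """Return True if individual `a` beats `b` lexicographically.

    The higher-priority objective (accuracy) decides unless the two
    individuals are within `acc_tol` of each other; only then does the
    lower-priority objective (tree size) break the tie.
    """
    if abs(a.accuracy - b.accuracy) > acc_tol:
        return a.accuracy > b.accuracy
    return a.tree_size < b.tree_size


def tournament_select(population, fitnesses, k: int = 2):
    """Pick `k` random individuals and return the lexicographically best one."""
    idxs = random.sample(range(len(population)), k)
    best = idxs[0]
    for i in idxs[1:]:
        if lexicographic_better(fitnesses[i], fitnesses[best]):
            best = i
    return population[best]
```

Using a tolerance on the primary objective is one common way to let the secondary objective (model simplicity) influence selection without sacrificing accuracy; the actual GA operators (tree encoding, crossover, mutation) are left out of this sketch.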