Abstract
Label semantics is a random-set-based framework for "Computing with Words" that captures the idea of computing on linguistic terms rather than numerical quantities. Within this framework, a decision tree learning model is proposed in which nodes are linguistic descriptions of variables and leaves are sets of appropriate labels. In such decision trees, probability estimates across all branches of the tree are used for classification, rather than the majority class of the single branch into which an example falls. Empirical experiments on real-world datasets verify that the algorithm achieves better or equivalent classification accuracy compared with three well-known machine learning algorithms. By applying a new forward branch-merging algorithm, the complexity of the tree can be greatly reduced without significant loss of accuracy. Finally, a linguistic interpretation of the trees and classification under linguistic constraints are introduced.
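The classification rule described above, aggregating class probabilities over all branches rather than taking the majority class of a single branch, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the branch representation (a class-probability dictionary per branch) and the membership degrees of an example in each branch are assumptions for illustration.

```python
def classify(branches, memberships):
    """Aggregate class probabilities across all branches.

    branches:    list of dicts mapping class label -> probability,
                 one dict per branch (illustrative representation).
    memberships: degree to which the example belongs to each branch.
    Returns the class with the highest aggregated probability.
    """
    total = sum(memberships)
    scores = {}
    for probs, m in zip(branches, memberships):
        for cls, p in probs.items():
            # weight each branch's class distribution by the example's
            # membership in that branch, then accumulate
            scores[cls] = scores.get(cls, 0.0) + m * p
    # normalisation by total membership does not change the argmax,
    # but yields proper probability estimates
    return max(scores, key=lambda c: scores[c] / total)
```

For example, an example with partial membership in two branches is assigned the class favoured by the weighted combination of both branch distributions, not just the distribution of its dominant branch.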