Abstract

Label ranking studies the problem of learning a preference model that maps instances to rankings over a finite set of predefined class labels. The training data consists of instances labeled with rankings. Since these rankings are often incomplete, models must be able to handle missing information in the class labels to be useful in practice. Several decision tree models have been proposed to learn from incomplete rankings, mainly using axis-parallel decision nodes, the standard approach in decision tree induction. In contrast, the present work introduces a method for learning oblique decision trees for the label ranking problem, as oblique trees have been shown to improve performance in the standard classification setting. Our experiments show that this method offers several advantages over the current decision tree model: it generates more compact tree structures, achieves markedly better results for complete rankings and for low percentages of missing labels, and is faster on the largest datasets.
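To make the contrast concrete, the following minimal sketch (an illustration, not the paper's implementation; the function names are hypothetical) shows the difference between the two kinds of decision node: an axis-parallel node tests a single feature against a threshold, while an oblique node tests a linear combination of all features, i.e., a hyperplane at an arbitrary angle to the feature axes.

```python
import numpy as np

def axis_parallel_split(x, feature, threshold):
    # Standard decision-tree test: compare one feature to a threshold.
    # The induced boundary is perpendicular to that feature's axis.
    return x[feature] <= threshold

def oblique_split(x, weights, threshold):
    # Oblique test: compare a weighted combination of ALL features to a
    # threshold, defining a hyperplane not aligned with any single axis.
    return np.dot(weights, x) <= threshold

# Example instance with two features.
x = np.array([0.4, 0.9])

print(axis_parallel_split(x, feature=0, threshold=0.5))          # True
print(oblique_split(x, weights=np.array([1.0, -1.0]), threshold=0.0))  # True
```

Because a single oblique node can separate regions that would require many stacked axis-parallel tests, oblique trees tend to be more compact, which is consistent with the compactness advantage reported above.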
