Abstract

Predictive clustering trees (PCTs) are a well-established generalization of standard decision trees, which can be used to solve a variety of predictive modeling tasks, including structured output prediction. Combining PCTs into ensembles yields state-of-the-art performance. However, they scale poorly to problems with high-dimensional output spaces and cannot exploit sparsity in the data. Both of these issues are particularly pronounced in (hierarchical) multi-label classification tasks, where the output can consist of hundreds of labels (high dimensionality), among which only a few are relevant for each example (sparsity). Sparsity is also often encountered in the input space (molecular fingerprints, bag-of-words representations, etc.). In this paper, we propose oblique predictive clustering trees, which address these limitations. We design and implement two methods for learning oblique splits whose tests contain linear combinations of features, so that each split corresponds to an arbitrary hyperplane in the input space. The resulting oblique trees are efficient on high-dimensional data and can exploit sparse data. We experimentally evaluate the proposed methods on 60 benchmark datasets covering 6 predictive modeling tasks. The results show that oblique predictive clustering trees achieve performance on par with state-of-the-art methods and are orders of magnitude faster than standard predictive clustering trees. We also show that meaningful feature importance scores can be extracted from the models learned with the proposed methods.
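To make the notion of an oblique split concrete, the minimal sketch below contrasts a standard axis-parallel test with an oblique test that uses a linear combination of features. The function names, the weight/bias parametrization, and the convention that non-positive values route to the left child are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def axis_parallel_test(x, feature_index, threshold):
    """Standard decision-tree split: compares a single feature to a threshold."""
    return x[feature_index] <= threshold

def oblique_test(x, weights, bias):
    """Oblique split: the test w.x + b <= 0 uses a linear combination of all
    features, so the decision boundary is an arbitrary hyperplane in the input
    space rather than an axis-parallel one."""
    return float(np.dot(weights, x)) + bias <= 0.0

# Example: a boundary that no single-feature threshold can express exactly.
x = np.array([0.4, 0.7, 0.1])
print(axis_parallel_test(x, feature_index=1, threshold=0.5))              # False
print(oblique_test(x, weights=np.array([1.0, -1.0, 2.0]), bias=0.0))      # True
```

Because the oblique test reduces to a single dot product, it can also be evaluated directly on sparse feature vectors, which is one way such splits can exploit sparsity in the input space.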
