Abstract
Multitask (multi-target or multi-output) learning (MTL) deals with the simultaneous prediction of several outputs. MTL approaches rely on optimizing a joint score function over the targets. However, defining a joint score in global models is problematic when the targets have different scales. To address such problems, single-target (i.e., local) learning strategies are commonly employed. Here we propose alternative tree-based learning strategies to handle the target-scaling issue in global models and to identify the learning order for chaining operations in local models. In the first proposal, the problems with target scaling are resolved using alternative splitting strategies that treat the learning tasks in a multi-objective optimization framework. The second proposal addresses the problem of ordering in chaining strategies. We introduce an alternative estimation strategy, the minimum error chain policy, which gradually expands the input space using estimations that approximate the true characteristics of the outputs, namely out-of-bag estimations in a tree-based ensemble framework. Our experiments on benchmark datasets illustrate the success of the proposed multitask extension of trees compared to decision trees with the de facto design, especially for datasets with a large number of targets. In line with that, the minimum error chain policy improves upon the performance of state-of-the-art chaining policies.
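The chaining idea sketched in the abstract lends itself to a compact implementation on top of a bagged tree ensemble. Below is a minimal sketch, assuming scikit-learn's RandomForestRegressor and its oob_prediction_ attribute; the function name minimum_error_chain and the greedy smallest-OOB-error selection rule are illustrative assumptions based on the abstract's description, not necessarily the authors' exact algorithm.

```python
# Illustrative sketch of a minimum-error chain policy for multi-target
# regression, based on the abstract's description. Assumes scikit-learn;
# minimum_error_chain is a hypothetical helper name.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def minimum_error_chain(X, Y, n_estimators=100, random_state=0):
    """Greedily order the targets by out-of-bag (OOB) error; after each
    step, augment the input space with the OOB predictions of the chosen
    target so later models in the chain can exploit them.

    X : (n_samples, n_features) input matrix
    Y : (n_samples, n_targets) target matrix
    """
    n_targets = Y.shape[1]
    remaining = list(range(n_targets))
    order, models = [], {}
    X_aug = X.copy()
    while remaining:
        # Fit one forest per remaining target and record its OOB error.
        # (With enough trees, every sample is out-of-bag at least once.)
        errors, candidates = {}, {}
        for t in remaining:
            rf = RandomForestRegressor(n_estimators=n_estimators,
                                       bootstrap=True,
                                       oob_score=True,
                                       random_state=random_state)
            rf.fit(X_aug, Y[:, t])
            oob_pred = rf.oob_prediction_
            errors[t] = mean_squared_error(Y[:, t], oob_pred)
            candidates[t] = (rf, oob_pred)
        # Pick the target with the smallest OOB error as the next link.
        best = min(remaining, key=errors.get)
        rf, oob_pred = candidates[best]
        order.append(best)
        models[best] = rf
        # Expand the input space with OOB predictions (not the true
        # targets), so the augmentation mimics test-time estimates.
        X_aug = np.column_stack([X_aug, oob_pred])
        remaining.remove(best)
    return order, models
```

Using the out-of-bag predictions, rather than the true target values, to expand the input space keeps the augmented features close to what is actually available at prediction time; this is one reading of the abstract's point that the expansion should use "estimations that approximate the true characteristics of the outputs".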