Abstract

Choosing the most suitable algorithm for a new machine learning problem is a recurrent and complex task. In multi-target regression, when problem transformation methods are applied, this choice is even harder, because the problem transformation method and the base learning algorithm must be chosen simultaneously. This work investigates how to recommend the method/base-learner combination for problems with multiple outputs. In our experiments, we use a large number of multi-target regression datasets to investigate whether meta-learning can provide good recommendations. To do this, we compare meta-models induced by three different ML algorithms, each with three variations, and select 58 meta-features that we consider relevant for building good dataset descriptions for the meta-learning process. In the experimental results, the meta-models outperformed the baselines (Majority and Random), recommending the most suitable multi-target regression solution (transformation method and base learner) with high predictive performance, including on real-world applications. The meta-features and the relation between the transformation method and the base learner provided important insights regarding the optimal problem transformation method. Furthermore, when comparing algorithm adaptation and problem transformation methods, our meta-learning proposal statistically outperformed all competitors, resulting in predictive performance corresponding to the best choice per problem.
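To illustrate the general idea described above, the following is a minimal sketch, assuming scikit-learn and entirely hypothetical names and data: each meta-example describes one multi-target regression dataset by a vector of meta-features, and the label is the best-performing (transformation method, base learner) pair observed in prior experiments. It is not the paper's actual pipeline, meta-features, or label set.

```python
# Minimal, illustrative meta-learning recommender (hypothetical data and labels).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy meta-dataset: 40 "datasets", each described by 5 meta-features
# (e.g., number of instances, number of targets, mean target correlation, ...).
X_meta = rng.normal(size=(40, 5))

# Best solution per dataset, e.g. "ST+RF" = single-target transformation with
# a random forest base learner (labels are illustrative only).
solutions = np.array(["ST+RF", "ST+SVR", "RC+RF", "MTRS+RF"])
y_meta = rng.choice(solutions, size=40)

# The meta-model learns to map meta-features to the recommended solution.
meta_model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(meta_model, X_meta, y_meta, cv=5)
print("meta-model accuracy:", scores.mean())

# For a new multi-target regression problem, compute its meta-features and
# ask the meta-model which transformation method / base learner to use.
meta_model.fit(X_meta, y_meta)
new_problem_meta_features = rng.normal(size=(1, 5))
print("recommended solution:", meta_model.predict(new_problem_meta_features)[0])
```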
