Abstract

Dataset scaling, a.k.a. normalization, is an essential preprocessing step in a machine learning (ML) pipeline. It aims to adjust the scale of attributes so that they all vary within the same range. This transformation is known to improve the performance of classification models. Still, there are several scaling techniques (STs) to choose from, and no ST is guaranteed to be the best for a dataset regardless of the classifier chosen; the choice is thus a problem- and classifier-dependent decision. Furthermore, there can be a huge difference in performance when the wrong technique is selected, so this step should not be neglected. Even so, the trial-and-error process of finding the most suitable technique for a particular dataset can be infeasible. As an alternative, we propose the Meta-scaler, which uses meta-learning (MtL) to build meta-models that automatically select the best ST for a given dataset and classification algorithm. The meta-models learn to represent the relationship between meta-features extracted from the datasets and the performance of specific classification algorithms on these datasets when scaled with different techniques. Our experiments using 12 base classifiers, 300 datasets, and five STs demonstrate the feasibility and effectiveness of the approach. When using the ST selected by the Meta-scaler for each dataset, 10 of the 12 base classifiers tested achieved statistically significantly better classification performance than with any fixed choice of a single ST. The Meta-scaler also outperforms state-of-the-art MtL approaches for ST selection. The source code, data, and results from the experiments in this article are available at a GitHub repository (http://github.com/amorimlb/meta_scaler).

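For intuition, the sketch below outlines one way such an MtL pipeline for ST selection could be wired together with scikit-learn. It is a minimal illustration, not the authors' implementation: the hand-crafted meta-features, the RandomForest meta-model, the k-NN base classifier, and the particular set of five candidate scalers are all placeholder assumptions chosen for the sketch.

```python
# Minimal sketch of a Meta-scaler-style pipeline (illustrative assumptions only).
import numpy as np
from sklearn.preprocessing import (
    StandardScaler, MinMaxScaler, MaxAbsScaler, RobustScaler, Normalizer,
)
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Candidate scaling techniques (placeholders for the five STs used in the paper).
SCALERS = {
    "standard": StandardScaler,
    "minmax": MinMaxScaler,
    "maxabs": MaxAbsScaler,
    "robust": RobustScaler,
    "l2norm": Normalizer,
}

def meta_features(X):
    """Toy meta-feature vector: dataset shape plus simple attribute statistics."""
    return np.array([X.shape[0], X.shape[1], X.mean(), X.std(), np.abs(X).max()])

def best_scaler_label(X, y, base_clf=KNeighborsClassifier):
    """Label a training dataset with the ST that maximizes the base classifier's CV accuracy."""
    scores = {
        name: cross_val_score(make_pipeline(cls(), base_clf()), X, y, cv=5).mean()
        for name, cls in SCALERS.items()
    }
    return max(scores, key=scores.get)

def train_meta_model(datasets):
    """Meta-training: one (meta-feature vector, best-ST label) pair per dataset."""
    M = np.vstack([meta_features(X) for X, _ in datasets])
    labels = [best_scaler_label(X, y) for X, y in datasets]
    return RandomForestClassifier(random_state=0).fit(M, labels)

def recommend_scaler(meta_model, X_new):
    """Inference: recommend an ST for a new dataset from its meta-features alone."""
    name = meta_model.predict(meta_features(X_new).reshape(1, -1))[0]
    return SCALERS[name]()
```

Under these assumptions, the expensive trial-and-error (running every ST with the base classifier) is paid only once, on the meta-training datasets; for a new dataset, the recommendation requires only computing its meta-features and querying the meta-model.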