Abstract

Feature selection is a central concern in time-series forecasting because it directly affects the performance of predictive models. Conventional approaches rely heavily on domain knowledge and practitioner experience, making them vulnerable to individual subjectivity and inconsistent results. In domains such as financial markets, time-series datasets often contain a large number of features, so methods must cope with high-dimensional data; traditional techniques incur substantial computational costs on such data and remain susceptible to the curse of dimensionality. To address these challenges, this paper proposes a feature selection method grounded in ensemble learning. The paper formally integrates ensemble learning into feature selection under the guiding principle of "good but different": five feature selection methods well suited to ensembling are identified, and their weights on a given dataset are determined through K-fold cross-validation. The ensemble then combines the outputs of these diverse techniques into a single numeric composite score, mitigating the biases of individual methods and improving the precision and comprehensiveness of feature selection. Consequently, the method improves the performance of time-series prediction models.
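The general scheme the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual method: the abstract does not name the five base selectors or the exact aggregation rule, so the choice of selectors (F-test, mutual information, Lasso coefficients, random-forest importances), the Ridge evaluator, and the min-max normalisation are all assumptions. Each selector scores every feature, each selector's weight is its K-fold cross-validated skill when a model is trained on its top-k features, and the normalised scores are combined into one weighted composite per feature.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import f_regression, mutual_info_regression
from sklearn.linear_model import LassoCV, Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold


def base_scores(X, y):
    """Per-feature importance from several 'good but different' selectors.

    These four selectors are illustrative stand-ins for the paper's five.
    """
    f_stat, _ = f_regression(X, y)
    return {
        "f_test": f_stat,
        "mutual_info": mutual_info_regression(X, y, random_state=0),
        "lasso": np.abs(LassoCV(cv=3, random_state=0).fit(X, y).coef_),
        "rand_forest": RandomForestRegressor(
            n_estimators=100, random_state=0).fit(X, y).feature_importances_,
    }


def method_weight(X, y, scores, top_k=3, n_splits=5):
    """Weight one selector by the K-fold CV skill of a model on its top-k features."""
    top = np.argsort(scores)[::-1][:top_k]
    r2s = []
    for train, test in KFold(n_splits=n_splits, shuffle=True,
                             random_state=0).split(X):
        model = Ridge().fit(X[train][:, top], y[train])
        r2s.append(r2_score(y[test], model.predict(X[test][:, top])))
    return max(float(np.mean(r2s)), 0.0)  # clip: a useless selector gets weight 0


def ensemble_select(X, y):
    """Combine normalised per-method scores into one weighted composite."""
    per_method = base_scores(X, y)
    weights = {m: method_weight(X, y, s) for m, s in per_method.items()}
    total = sum(weights.values()) or 1.0
    weights = {m: w / total for m, w in weights.items()}
    composite = np.zeros(X.shape[1])
    for m, s in per_method.items():
        rng = s.max() - s.min()
        composite += weights[m] * ((s - s.min()) / rng if rng else np.zeros_like(s))
    return composite, weights


# Toy regression data: with shuffle=False the first 3 columns are informative.
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=0.1, shuffle=False, random_state=0)
composite, weights = ensemble_select(X, y)
```

On this synthetic dataset the composite scores concentrate on the informative columns, while the normalised weights reflect how well each individual selector's top features predict the target.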
