Abstract

Feature selection wrapper methods are powerful mechanisms for reducing the complexity of prediction models while preserving, or even improving, their predictive accuracy. Meta-heuristic methods, such as multi-objective evolutionary algorithms, are commonly used as search strategies in feature selection wrapper methods since they allow minimizing the cardinality of the attribute subset while simultaneously maximizing the predictive capacity of the model. However, in high-dimensional problems, multi-objective evolutionary algorithms for wrapper-type feature selection may require excessive, sometimes impractical, computational time, especially when the learning algorithm has a high computational cost, as in deep learning. To address this drawback, in this paper we propose a multi-surrogate assisted multi-objective evolutionary algorithm for feature selection, specially designed to reduce the generalization error. The proposed method has been compared with conventional feature selection wrapper methods that use random forest, support vector machine and long short-term memory learning algorithms to evaluate subsets of attributes. The experiments have been carried out on regression and classification problems with time series data for air quality forecasting in the south-east of Spain and for indoor temperature forecasting in a domotic house. The results demonstrate the superiority of the proposed multi-surrogate assisted method over conventional wrapper methods under the same run-time budget.
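To make the wrapper formulation summarized above concrete, the following is a minimal illustrative sketch (not the authors' implementation) of the two objectives an evolutionary search would optimize for each candidate feature subset: the number of selected attributes and the cross-validated prediction error of a learning algorithm trained on that subset. The synthetic dataset, the random forest model and the binary mask encoding are assumptions chosen only for illustration.

```python
# Illustrative sketch: two-objective wrapper evaluation of one feature subset.
# The evolutionary algorithm described in the abstract would call such an
# evaluation for every candidate solution in its population.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical regression data standing in for a time-series forecasting task.
X, y = make_regression(n_samples=300, n_features=40, n_informative=8,
                       noise=0.5, random_state=0)

def evaluate_subset(mask):
    """Return the two wrapper objectives for a binary feature mask:
    (cardinality of the subset, cross-validated prediction error)."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:                # an empty subset is infeasible
        return len(mask), np.inf
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    # scikit-learn returns negative MSE; flip the sign so lower is better.
    error = -cross_val_score(model, X[:, selected], y,
                             scoring="neg_mean_squared_error", cv=5).mean()
    return selected.size, error

# One candidate solution encoded as a binary mask, as a multi-objective EA might.
mask = np.random.default_rng(0).integers(0, 2, size=X.shape[1])
print(evaluate_subset(mask))
```

In the surrogate-assisted setting motivated by the abstract, most calls to an expensive evaluation like this would be replaced by cheap surrogate predictions, with the real wrapper evaluation reserved for a small fraction of promising candidates.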
