Abstract

Feature selection, which aims to screen out redundant and irrelevant features from datasets, is integral to machine learning and data mining. Grey Wolf Optimization (GWO) is a recent swarm-intelligence meta-heuristic that applies to a wide range of optimization problems thanks to its fast convergence and small number of parameters. However, because the pack is always dominated by the three leading wolves (i.e., α, β and δ), GWO suffers from weak exploration throughout the optimization process and easily stagnates in local optima. In this paper, an Adaptively Balanced Grey Wolf Optimization (ABGWO) algorithm is proposed to seek the optimal feature subset for high-dimensional classification. Specifically, to improve the exploration ability of GWO, a random wolf is introduced to cooperate with α, β and δ, and a novel level-based strategy is adopted to select it. In addition, to dynamically balance exploration and exploitation across optimization stages, an adaptive coefficient regulates the leadership of α, β, δ and the randomly selected wolf. Finally, the improvement in exploration and exploitation is validated on 12 high-dimensional datasets provided by Arizona State University and the University of California, Irvine, and the superiority of ABGWO is further verified by comparison with seven state-of-the-art feature selection approaches in terms of classification accuracy, feature-subset size and computational time.
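The mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation: the level-based selection is replaced by a uniform random pick, the adaptive coefficient `w` is assumed to grow linearly from 0 to 1 (shifting influence from the random wolf toward the three leaders), and the sphere function stands in for a classification-based fitness.

```python
import random

def abgwo_sketch(obj, dim, n_wolves=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Sketch of a GWO variant: standard alpha/beta/delta guidance blended
    with a randomly selected wolf via an adaptive coefficient w.
    NOTE: uniform random-wolf selection and linear w are assumptions,
    not the paper's level-based strategy or exact schedule."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        ranked = sorted(wolves, key=obj)
        alpha, beta, delta = ranked[0], ranked[1], ranked[2]
        a = 2.0 * (1 - t / iters)   # standard GWO control parameter: 2 -> 0
        w = t / iters               # adaptive coefficient: explore early, exploit late
        for i, x in enumerate(wolves):
            r_wolf = wolves[rng.randrange(n_wolves)]  # random wolf (uniform pick)
            new = []
            for d in range(dim):
                # Guidance from the three leaders, as in canonical GWO.
                guided = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    guided += leader[d] - A * abs(C * leader[d] - x[d])
                guided /= 3.0
                # Guidance from the random wolf (exploration term).
                A = a * (2 * rng.random() - 1)
                C = 2 * rng.random()
                rand_step = r_wolf[d] - A * abs(C * r_wolf[d] - x[d])
                v = w * guided + (1 - w) * rand_step  # adaptive blend
                new.append(min(ub, max(lb, v)))       # clamp to bounds
            wolves[i] = new
    best = min(wolves, key=obj)
    return best, obj(best)

sphere = lambda x: sum(v * v for v in x)  # toy fitness stand-in
best, fit = abgwo_sketch(sphere, dim=5)
```

For the feature-selection setting of the paper, positions would instead be thresholded into binary feature masks and `obj` would combine classifier accuracy with subset size.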
