Abstract

In feature selection research, multi-class methods that select informative features for all classes simultaneously are popular. Recursive feature elimination (RFE) methods are among the state-of-the-art binary feature selection algorithms; however, extending existing RFE algorithms to multi-class tasks can increase computational cost and degrade performance. Motivated by these challenges, we introduce a unified multi-class feature selection (UFS) framework for randomization-based neural networks. First, we propose a new multi-class feature ranking criterion based on the output weights of the network. The heuristic underlying this criterion is that "the importance of a feature should be related to the magnitude of the output weights of a neural network". The UFS framework then trains a randomization-based neural network on the original features, ranks the features by the norm of their associated output weights, and recursively removes the feature with the lowest ranking score. Extensive experiments on 15 real-world datasets show that the proposed framework outperforms state-of-the-art algorithms. The code of UFS is available at https://github.com/SVMrelated/UFS.git.
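The recursive procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes an RVFL-style network with direct input-to-output links, solves for the output weights with ridge regression, scores each feature by the L2 norm of its direct-link output weights across classes, and eliminates the lowest-scoring feature at each round. All function names and parameters (`rvfl_output_weights`, `ufs_rank`, `n_hidden`, `reg`) are hypothetical.

```python
import numpy as np

def rvfl_output_weights(X, Y, n_hidden=50, reg=1e-2, rng=None):
    """Fit the output weights of an RVFL-style network with direct links.
    X: (n_samples, n_features), Y: (n_samples, n_classes) one-hot targets.
    Hypothetical sketch; hidden weights are random and never trained."""
    rng = np.random.default_rng(0) if rng is None else rng
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    # Direct links: original features concatenated with hidden outputs,
    # so the first n_features rows of beta weight the raw features.
    D = np.hstack([X, H])
    # Closed-form ridge regression for the output weights.
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)
    return beta

def ufs_rank(X, Y, n_hidden=50, reg=1e-2):
    """Recursively eliminate the feature whose direct-link output weights
    have the smallest L2 norm across all classes; return features ordered
    from most to least important."""
    remaining = list(range(X.shape[1]))
    elimination_order = []
    while len(remaining) > 1:
        beta = rvfl_output_weights(X[:, remaining], Y, n_hidden, reg)
        # Score each surviving feature by the norm of its output-weight row.
        scores = np.linalg.norm(beta[:len(remaining)], axis=1)
        elimination_order.append(remaining.pop(int(np.argmin(scores))))
    elimination_order.append(remaining[0])
    # The last feature eliminated is the most important, so reverse.
    return elimination_order[::-1]
```

Scoring a feature by the norm of its output-weight row aggregates its contribution to every class at once, which is what lets a single model rank features for all classes simultaneously instead of training one RFE per class.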
