Abstract

Feature selection (FS) is an important step in machine learning, as it has been shown to improve prediction accuracy while mitigating the curse of dimensionality in high-dimensional data. Neural networks have achieved tremendous success on many nonlinear learning problems. Here, we propose a new neural-network-based FS approach that introduces two constraints whose satisfaction yields a sparse FS layer. We performed extensive experiments on synthetic and real-world data to evaluate the performance of the proposed FS method. The experiments focus on high-dimensional, low-sample-size data, since such data pose the main challenge for FS. The results confirm that the proposed FS method, based on a sparse neural-network layer with normalizing constraints (SNeL-FS), is able to select the important features and yields superior performance compared to conventional FS methods.
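The abstract does not spell out the two constraints, so the following is only a minimal sketch of the general idea of a sparse, normalized feature-selection layer: a per-feature gate vector trained with an L1 sparsity penalty (applied via a proximal soft-threshold step) and re-normalized to unit L2 norm after each update. The gate parameterization, penalty, and normalization chosen here are illustrative assumptions, not the paper's actual SNeL-FS formulation.

```python
# Illustrative sketch only: a gated feature-selection layer with an L1
# sparsity penalty and a unit-norm "normalizing" constraint on the gate.
# The specific objective and constraints are assumptions, not SNeL-FS.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrinks entries toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)

# Toy high-dimensional, low-sample-size data: 20 samples, 50 features,
# where only feature 3 is informative for the target.
n, d = 20, 50
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 3]

w = np.full(d, 1.0 / np.sqrt(d))        # FS gate, starts uniform with unit L2 norm
beta = rng.normal(scale=0.1, size=d)    # downstream linear predictor
lam, lr = 0.01, 0.05                    # sparsity strength and learning rate

for _ in range(500):
    err = (X * w) @ beta - y            # gated forward pass, residual
    g = (2.0 / n) * (X.T @ err)         # shared gradient term of the MSE
    beta -= lr * g * w                  # gradient step on the predictor
    w = soft_threshold(w - lr * g * beta, lr * lam)  # L1 proximal step on the gate
    w /= np.linalg.norm(w) + 1e-12      # re-impose the unit-norm constraint

selected = int(np.argmax(np.abs(w)))
print("selected feature:", selected)
```

On this toy problem the gate concentrates its mass on the single informative feature, illustrating how a sparsity penalty combined with a normalization constraint can drive an FS layer toward selecting a small feature subset.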
