Abstract

The extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden-layer feedforward neural networks (SLFNs), provides efficient unified learning solutions for regression and classification, and delivers competitive accuracy with superb efficiency in many applications. However, due to its single-layer architecture, feature selection using ELM may not be effective for natural signals. To address this issue, this paper proposes a new ELM-based multi-layer learning framework for dimension reduction. The novelties of this work are as follows: (1) Unlike existing multi-layer ELM methods in which all hidden nodes are generated randomly, here some hidden layers are calculated by replacement techniques. In this way, more important information can be exploited for feature learning, which leads to better generalization performance. (2) Unlike existing multi-layer ELM methods, which only work for sparse representation, the proposed method is designed for dimension reduction. Experimental results on several classification datasets show that the proposed method performs competitively with, or much better than, other feature selection methods, with fast learning speed.
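To make the baseline concrete, the sketch below shows the classic single-hidden-layer ELM that the abstract builds on: input weights and biases are drawn randomly and never trained, and only the output weights are solved in closed form by a least-squares fit. This is a minimal illustration of the standard ELM, not the paper's multi-layer method; all function and parameter names are our own.

```python
import numpy as np

def elm_train(X, T, n_hidden=32, seed=0):
    """Train a basic single-hidden-layer ELM.

    Hidden-layer parameters (W, b) are random and fixed; only the
    output weights beta are computed, via the Moore-Penrose
    pseudoinverse of the hidden-layer output matrix H.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Apply the fixed random hidden layer, then the learned output weights."""
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit an XOR-like mapping, which a single linear layer cannot.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W, b, beta = elm_train(X, T, n_hidden=32)
pred = elm_predict(X, W, b, beta)
```

Because the hidden layer is never iteratively tuned, training cost is dominated by one pseudoinverse, which is the source of ELM's speed; the paper's contribution is to replace some of these purely random hidden layers with computed ones in a multi-layer stack.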
