Abstract

Extreme learning machine (ELM) uses a non-iterative method to train single-hidden-layer feed-forward networks (SLFNs) and has proven to be an efficient and effective learning model for both classification and regression. The main advantage of ELM is that the input weights and hidden-layer biases can be generated randomly, which allows the output weights to be computed analytically. In this paper, we propose a discriminative manifold ELM (DMELM) that simultaneously considers the discriminative information and the geometric structure of the data; specifically, we exploit the discriminative information in the local neighborhood around each data point. To this end, a graph regularizer based on a newly designed graph Laplacian that characterizes both properties is formulated and incorporated into the ELM objective. In DMELM, the output weights can still be obtained in analytical form. Extensive experiments on image and EEG signal classification are conducted to evaluate the effectiveness of DMELM. The results show that DMELM consistently outperforms the original ELM and yields promising results in comparison with several state-of-the-art algorithms, suggesting that both discriminative and manifold information are beneficial to classification.
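To make the non-iterative training concrete, the following is a minimal sketch of standard ELM (not the proposed DMELM, whose graph-regularized objective is detailed in the paper): the input weights and hidden biases are drawn at random and fixed, and the output weights are obtained in closed form as the least-squares solution via a pseudo-inverse. All function names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def train_elm(X, Y, n_hidden=50, seed=0):
    """Basic ELM sketch: random hidden layer, analytic output weights.

    X: (n_samples, n_features) inputs; Y: (n_samples, n_outputs) targets.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Input weights and hidden biases are generated randomly and never updated.
    W = rng.normal(size=(n_features, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer output matrix
    # Output weights: analytical least-squares solution via Moore-Penrose pseudo-inverse.
    beta = np.linalg.pinv(H) @ Y
    return W, b, beta

def predict_elm(X, W, b, beta):
    """Forward pass through the fixed random hidden layer and learned output weights."""
    return np.tanh(X @ W + b) @ beta
```

Because no iterative optimization is involved, training cost is dominated by a single pseudo-inverse; regularized variants such as DMELM replace this step with a closed-form solution of a modified normal equation that includes the graph-regularization term.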
