Abstract

Extreme learning machine (ELM) has gained increasing interest from various research fields in recent years, and researchers have proposed numerous extensions to improve its stability, sparsity, and generalization performance. In this paper, we propose a robust and sparse ELM that exploits $L_{21}$-norm minimization of both the loss function and the regularization term (LR21-ELM). Compared with an $L_{2}$-norm-based loss, our $L_{21}$-norm-based loss function diminishes the undue influence of noise and outliers among the data points, making the learned ELM model more robust and stable. The powerful structured-sparsity-inducing $L_{21}$-norm regularization is integrated into the ELM objective function to adaptively eliminate potentially redundant hidden neurons and reduce the complexity of the learning model. We introduce an effective iterative optimization algorithm to solve the resulting $L_{21}$-norm minimization problem. Empirical tests on a number of benchmark datasets indicate that the proposed algorithm generates a more compact, robust, and discriminative model than the original ELM algorithm.
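The abstract's iterative optimization step can be illustrated with the standard iteratively reweighted least-squares scheme commonly used for joint $L_{21}$-norm minimization. The sketch below is an assumption-laden illustration, not the paper's exact algorithm: it solves $\min_W \|HW - T\|_{2,1} + \lambda \|W\|_{2,1}$ for the output weights $W$, given a hidden-layer output matrix $H$ and targets $T$; the function names, the warm start, and the clamping constant `eps` are all choices made here for the sketch.

```python
import numpy as np

def l21_norm(M):
    # L21 norm: sum of the L2 norms of the rows of M
    return np.sqrt((M ** 2).sum(axis=1)).sum()

def lr21_elm_solve(H, T, lam=1.0, n_iter=50, eps=1e-8):
    """Illustrative reweighted solver (not the paper's exact method) for
       min_W  ||H W - T||_{2,1} + lam * ||W||_{2,1}.
    H: (n_samples, n_hidden) hidden-layer outputs, T: (n_samples, n_outputs)."""
    # Warm start from the ordinary L2 least-squares solution
    W = np.linalg.lstsq(H, T, rcond=None)[0]
    for _ in range(n_iter):
        E = H @ W - T
        # Row-wise reweighting; eps guards against division by zero
        d1 = 1.0 / (2.0 * np.maximum(np.linalg.norm(E, axis=1), eps))
        d2 = 1.0 / (2.0 * np.maximum(np.linalg.norm(W, axis=1), eps))
        HtD1 = H.T * d1  # scale each sample (column of H.T) by its weight
        # Closed-form update of the reweighted least-squares subproblem
        W = np.linalg.solve(HtD1 @ H + lam * np.diag(d2), HtD1 @ T)
    return W
```

In this scheme, rows of $W$ whose norms shrink toward zero correspond to hidden neurons that can be pruned, which is the mechanism behind the compactness claim in the abstract.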
