Abstract

Recently, multilayer extreme learning machine (ELM) algorithms have been extensively studied in the ELM community for learning hierarchical abstract representations. In this paper, we investigate a specific combination of an $$L_{21}$$-norm loss function and $$L_{21}$$-norm regularization to improve the robustness and sparsity of multilayer ELM. As is well known, the mean square error (MSE) cost function (i.e., the squared $$L_{2}$$-norm cost function) is commonly used as the optimization objective for ELM, but it is sensitive to the outliers and impulsive noise that are pervasive in real-world data. The $$L_{21}$$-norm loss function lessens the harmful influence of noise and outliers and enhances the robustness and stability of the learned model. Additionally, the row-sparsity-inducing $$L_{21}$$-norm regularization learns the most relevant sparse representation and reduces the intrinsic complexity of the learning model. We propose an ELM auto-encoder built on this combination of $$L_{21}$$-norm loss function and regularization (LR21-ELM-AE), and then stack LR21-ELM-AE modules hierarchically to construct a hierarchical extreme learning machine (H-LR21-ELM). Experiments on several well-known benchmark datasets show that the proposed H-LR21-ELM generates a more robust, more discriminative, and sparser model than other state-of-the-art multilayer ELM algorithms.
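For concreteness, a minimal sketch of the objective this combination implies, assuming standard ELM auto-encoder notation (hidden-layer output matrix $$H$$, output weights $$\beta$$, input data $$X$$, and trade-off parameter $$\lambda$$; these symbols are assumptions for illustration, not taken from the paper body). The $$L_{21}$$-norm of a matrix $$A$$ with entries $$a_{ij}$$ is the sum of the $$L_{2}$$-norms of its rows,

$$\Vert A\Vert_{2,1} = \sum_{i=1}^{m} \sqrt{\sum_{j=1}^{n} a_{ij}^{2}},$$

so a plausible form of the LR21-ELM-AE optimization problem is

$$\min_{\beta}\; \Vert H\beta - X\Vert_{2,1} + \lambda \Vert \beta \Vert_{2,1},$$

where the first term is the robust reconstruction loss (each sample's residual enters through its unsquared $$L_{2}$$-norm, so large outlier residuals are penalized linearly rather than quadratically) and the second term drives entire rows of $$\beta$$ toward zero, yielding row-sparse representations.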
