Abstract
Extreme learning machine (ELM) is a single-layer feed-forward neural network with the advantages of fast training and good generalisation. However, when the size of the hidden layer is increased, both of these advantages are lost, as the redundant information may cause overfitting. The traditional way to deal with this issue is to introduce regularisation that promotes sparsity in the output layer weight matrix. In this Letter, we propose imposing sparsity on the output of the hidden layer and using it as the only non-linearity in the hidden layer. In the proposed formulation, we use a linear activation function in the hidden layer and keep only the k neurons with the highest activity to enforce sparsity. Using principal component analysis, we project the resulting hidden layer output matrix onto a low-dimensional space in order to further remove redundant and irrelevant information and to speed up the training process. To verify the feasibility and effectiveness of the proposed method, we test it against a number of ELM variants on benchmark datasets. Our results demonstrate that the proposed method consistently achieves better accuracy than these methods across many different benchmark datasets.
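The pipeline described in the abstract (random linear hidden layer, top-k sparsification as the sole non-linearity, PCA projection, then least-squares output weights) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name fit_sparse_elm, the hyperparameters n_hidden, k and n_components, and the use of absolute activation magnitude to rank neuron activity are all assumptions made for the example.

```python
import numpy as np

def fit_sparse_elm(X, T, n_hidden=500, k=50, n_components=100, seed=0):
    """Illustrative k-sparse ELM with a PCA-projected hidden layer (assumed names/parameters)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]

    # Random input weights and biases, kept fixed as in a standard ELM.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)

    # Linear activation in the hidden layer.
    H = X @ W + b

    # Keep only the k highest-magnitude activations per sample (the sole non-linearity);
    # ranking by |activation| is an assumption for this sketch.
    drop_idx = np.argsort(np.abs(H), axis=1)[:, :-k]
    H_sparse = H.copy()
    np.put_along_axis(H_sparse, drop_idx, 0.0, axis=1)

    # PCA projection of the hidden layer output matrix onto a low-dimensional space.
    mean_H = H_sparse.mean(axis=0)
    _, _, Vt = np.linalg.svd(H_sparse - mean_H, full_matrices=False)
    P = Vt[:n_components].T
    H_reduced = (H_sparse - mean_H) @ P

    # Output weights via the Moore-Penrose pseudoinverse (standard ELM least-squares solution).
    beta = np.linalg.pinv(H_reduced) @ T
    return W, b, mean_H, P, beta
```

At prediction time the same steps would be repeated with the stored W, b, mean_H and P, followed by multiplication with beta; the specific value of k and the number of retained principal components would be tuned per dataset.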