Abstract

This paper proposes a simple but effective method to improve the generalization performance of the extreme learning machine (ELM), an extremely fast learning method for single-hidden-layer feedforward neural networks (SLFNs). The proposed method adopts an online sequential learning technique to update the output weight matrix of a trained SLFN using misclassified training samples. Because this weight update can be performed iteratively, the proposed method is named iterative ELM (I-ELM). The proposed I-ELM was evaluated on three datasets, namely MNIST, Small NORB, and CIFAR-10, and compared with the standard ELM. Experimental results indicate that, with only a few iterations, the proposed I-ELM can effectively improve the generalization performance of SLFNs.
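
The sketch below is only a rough illustration of this kind of procedure, not the authors' implementation: it pairs a standard batch ELM fit (random hidden layer, regularized least-squares output weights) with an OS-ELM-style recursive least-squares update restricted to the currently misclassified samples. The class name SketchIELM, the refine method, and hyperparameters such as n_hidden, reg, and n_iters are illustrative assumptions.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


class SketchIELM:
    """Hypothetical sketch: batch ELM plus iterative refinement of the
    output weights using misclassified training samples (assumed details)."""

    def __init__(self, n_hidden=512, reg=1e-3, rng=None):
        self.L = n_hidden          # number of random hidden neurons
        self.reg = reg             # ridge regularization strength
        self.rng = rng or np.random.default_rng(0)

    def _hidden(self, X):
        # Fixed random hidden layer: H = sigmoid(X W + b)
        return sigmoid(X @ self.W + self.b)

    def fit(self, X, T):
        # Standard batch ELM: solve a regularized least-squares problem
        # for the output weight matrix beta.  T is one-hot, shape (n, c).
        d = X.shape[1]
        self.W = self.rng.standard_normal((d, self.L))
        self.b = self.rng.standard_normal(self.L)
        H = self._hidden(X)
        # P caches (H^T H + reg I)^{-1} for the sequential updates below.
        self.P = np.linalg.inv(H.T @ H + self.reg * np.eye(self.L))
        self.beta = self.P @ H.T @ T
        return self

    def refine(self, X, T, n_iters=3):
        # Iteratively apply an OS-ELM-style recursive least-squares update
        # computed only on the samples the current model misclassifies.
        for _ in range(n_iters):
            H = self._hidden(X)
            pred = np.argmax(H @ self.beta, axis=1)
            wrong = pred != np.argmax(T, axis=1)
            if not wrong.any():
                break
            Hm, Tm = H[wrong], T[wrong]
            K = np.linalg.inv(np.eye(Hm.shape[0]) + Hm @ self.P @ Hm.T)
            self.P -= self.P @ Hm.T @ K @ Hm @ self.P
            self.beta += self.P @ Hm.T @ (Tm - Hm @ self.beta)
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```

In this reading, each refinement pass re-weights the misclassified samples by folding them into the least-squares solution once more, so a handful of iterations can shift the decision boundary without retraining the random hidden layer.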
