Abstract
Extreme learning machine (ELM), a randomized learning paradigm for single hidden layer feed-forward networks, has gained significant attention for solving problems in diverse domains due to its fast learning ability. The output weights in ELM are determined by an analytic procedure, while the input weights and biases are randomly generated and fixed during the training phase. The learning performance of ELM is highly sensitive to several factors, such as the number of nodes in the hidden layer, the initialization of the input weights, and the type of activation function in the hidden layer. Although various works on ELM have been proposed in the last decade, the effect of all these influencing factors on classification performance has not yet been fully investigated. In this paper, we test the performance of ELM with different configurations through an empirical evaluation on three standard handwritten character datasets (MNIST, the ISI-Kolkata Bangla numeral dataset, and the ISI-Kolkata Odia numeral dataset) and a newly developed NIT-RKL Bangla numeral dataset. Finally, we derive some best ELM configurations which can serve as general guidelines for designing ELM-based classifiers.
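The training procedure summarized above (random, fixed input weights and biases; output weights solved analytically via the Moore-Penrose pseudoinverse of the hidden-layer output matrix) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the sigmoid activation, and the uniform initialization range are assumptions.

```python
import numpy as np

def train_elm(X, T, n_hidden, rng=None):
    """Minimal ELM sketch: X is (n_samples, n_features), T is the
    (n_samples, n_outputs) target matrix (e.g. one-hot class labels)."""
    rng = rng or np.random.default_rng(0)
    n_features = X.shape[1]
    # Input weights and biases are drawn randomly and never updated
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Hidden-layer output matrix H (sigmoid activation assumed here)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights: analytic least-squares solution via pseudoinverse
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

The factors the paper studies map directly onto this sketch: `n_hidden` is the hidden-layer size, the `rng.uniform` call is the input-weight initialization scheme, and the sigmoid line is the activation-function choice.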