Abstract
Extreme learning machine (ELM), a randomized learning paradigm for single-hidden-layer feed-forward networks, has gained significant attention for solving problems in diverse domains due to its fast learning ability. In ELM, the input weights and hidden biases are randomly generated and fixed during the training phase, while the output weights are determined by an analytic procedure. The learning performance of ELM is highly sensitive to several factors, such as the number of nodes in the hidden layer, the initialization of the input weights, and the type of activation function in the hidden layer. Although various works on ELM have been proposed over the last decade, the effect of all these influencing factors on classification performance has not yet been fully investigated. In this paper, we test the performance of ELM with different configurations through an empirical evaluation on three standard handwritten character datasets, namely MNIST, ISI-Kolkata Bangla numeral, and ISI-Kolkata Odia numeral, as well as a newly developed NIT-RKL Bangla numeral dataset. Finally, we derive some of the best ELM configurations, which can serve as general guidelines for designing ELM-based classifiers.
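The training scheme described above (random, fixed input weights and biases; output weights computed analytically via the Moore-Penrose pseudoinverse) can be sketched as follows. This is a minimal illustrative implementation, not the paper's evaluated code; the hidden-layer size, `tanh` activation, and the toy targets are assumptions chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=20):
    """Train an ELM: input weights/biases are random and stay fixed;
    output weights are solved analytically (least squares via pseudoinverse)."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights (never updated)
    b = rng.standard_normal(n_hidden)                # random hidden biases (never updated)
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit XOR-like targets (for illustration only)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W, b, beta = elm_train(X, T)
pred = elm_predict(X, W, b, beta)
```

Because only the linear output layer is fit, training reduces to a single least-squares solve, which is the source of ELM's speed relative to gradient-based training.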