Abstract

Extreme learning machine (ELM) is a method for training feedforward neural networks with randomized hidden layers. It initializes the weights of the hidden neurons randomly and determines the output weights analytically using the Moore-Penrose (MP) generalized inverse. The No-Prop algorithm is a recently proposed training algorithm for feedforward neural networks in which the hidden-neuron weights are likewise randomly assigned and fixed, and the output weights are trained with the least mean squares (LMS) algorithm. The difference between ELM and No-Prop lies in how the output weights are trained: ELM optimizes them in batch mode using the MP generalized inverse, whereas No-Prop trains them iteratively with the LMS gradient algorithm. This paper provides a comparative analysis, based on empirical studies, of the stability and convergence performance of the ELM and No-Prop algorithms for data classification.
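The contrast between the two output-weight training schemes can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the toy data, network size, learning rate, and epoch count are all assumptions chosen only to make the two update rules concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (hypothetical, for illustration only)
X = rng.standard_normal((200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Random, fixed hidden layer shared by both methods
n_hidden = 50
W = rng.standard_normal((5, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)          # hidden-layer output matrix

# ELM: output weights computed in one batch step
# via the Moore-Penrose generalized inverse of H
beta_elm = np.linalg.pinv(H) @ y

# No-Prop style: output weights trained iteratively with the LMS rule
beta_lms = np.zeros((n_hidden, 1))
lr = 0.01                       # assumed learning rate
for _ in range(100):            # assumed number of epochs
    for h, t in zip(H, y):      # one LMS update per sample
        h = h.reshape(-1, 1)
        err = t - (h.T @ beta_lms)
        beta_lms += lr * err * h

acc_elm = np.mean((H @ beta_elm > 0.5) == y)
acc_lms = np.mean((H @ beta_lms > 0.5) == y)
```

Both methods leave the random hidden weights untouched; they differ only in whether the output weights come from a single pseudoinverse solve or from repeated LMS gradient updates.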
