Abstract

Extreme Learning Machine (ELM) has drawn considerable attention across many fields, notably in neural network research, for being an efficient learning algorithm. By using randomly generated hidden neurons, ELM achieves faster learning than traditional gradient-based algorithms. Furthermore, it has been shown that many types of hidden nodes, which need not be neuron-like, can be used in ELM as long as they are piecewise nonlinear. In this paper, we propose a Constrained-Optimization-based ELM network structure that implements a Bayesian framework in its hidden layer for learning and inference in a general form (denoted C-BPP-ELM). Several benchmark data sets are used to empirically evaluate the performance of the proposed model on pattern classification tasks. The results demonstrate that C-BPP-ELM outperforms both the conventional ELM and the Constrained-Optimization-based ELM, which in turn validates the capability of ELM to operate with a wide range of activation functions.
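To make the base algorithm concrete, the following is a minimal sketch of a conventional ELM classifier: hidden-layer weights are drawn at random and left untrained, and only the output weights are solved for by least squares. This illustrates the baseline only; the constrained-optimization formulation and the Bayesian treatment of the hidden layer in C-BPP-ELM are not reproduced here, and all names and parameters below are illustrative assumptions.

```python
import numpy as np

def elm_fit(X, y_onehot, n_hidden=50, seed=0):
    """Fit a basic ELM: random hidden layer + least-squares output weights."""
    rng = np.random.default_rng(seed)
    # Hidden-layer parameters are randomly generated and never trained.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)  # random nonlinear feature map
    # Output weights via the Moore-Penrose pseudoinverse (least squares).
    beta = np.linalg.pinv(H) @ y_onehot
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predict class labels by taking the argmax over output units."""
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy two-class problem: label is the sign of the first feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
labels = (X[:, 0] > 0).astype(int)
Y = np.eye(2)[labels]  # one-hot targets

W, b, beta = elm_fit(X, Y, n_hidden=40)
acc = np.mean(elm_predict(X, W, b, beta) == labels)
```

Because only `beta` is computed (in closed form), training reduces to a single linear solve, which is the source of ELM's speed advantage over iterative gradient-based training.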
