Abstract
This paper proposes using the Parallel Layer Perceptron (PLP) network in place of the Single Layer Feedforward Network (SLFN) within the Extreme Learning Machine (ELM) framework. Unlike SLFNs, which are built from cascaded layers, the PLP also admits layers arranged in parallel, with the SLFN as a particular case. The paper explores a PLP configuration in which a nonlinear layer operates in parallel with a linear layer. For n inputs and m nonlinear neurons, this configuration provides (n+1)m linear parameters, whereas the SLFN has only m (one per hidden neuron). Since the ELM adjusts only the linear parameters, via the least squares estimate (LSE), the PLP network offers more freedom for this adjustment. Results on 12 regression and 6 classification problems are reported in terms of training and test errors, the norm of the linear parameter vector, and the condition number of the system. They indicate that the PLP-ELM framework is more efficient than the SLFN-ELM approach.
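To make the parameter count concrete, the following is a minimal sketch of one plausible reading of the abstract: each of the m randomly parameterized nonlinear neurons gates an affine function of the n inputs, yielding the (n+1)m linear parameters, which are then fitted by least squares as in the standard ELM procedure. The sigmoid activation and the function names (plp_elm_fit, plp_elm_predict) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def plp_elm_fit(X, y, m, seed=None):
    """Sketch of a PLP-ELM regressor: a random nonlinear layer in
    parallel with a linear layer, where only the linear parameters
    are estimated by least squares.

    X : (N, n) inputs; y : (N,) targets; m : number of nonlinear neurons.
    Returns (W, b, beta): the fixed random hidden parameters and the
    (n+1)*m estimated linear parameters.
    """
    rng = np.random.default_rng(seed)
    N, n = X.shape
    # Random, fixed nonlinear-layer parameters, as in the ELM framework.
    W = rng.standard_normal((m, n))
    b = rng.standard_normal(m)
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))        # (N, m) nonlinear outputs
    # Parallel linear layer: each nonlinear neuron multiplies an affine
    # map of the inputs, giving (n+1)*m linear parameters in total.
    X_aug = np.hstack([X, np.ones((N, 1))])          # (N, n+1) inputs + bias
    Phi = (H[:, :, None] * X_aug[:, None, :]).reshape(N, m * (n + 1))
    # Least squares estimate (LSE) of the linear parameters.
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return W, b, beta

def plp_elm_predict(X, W, b, beta):
    """Evaluate the fitted PLP-ELM on new inputs."""
    N, n = X.shape
    m = W.shape[0]
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    X_aug = np.hstack([X, np.ones((N, 1))])
    Phi = (H[:, :, None] * X_aug[:, None, :]).reshape(N, m * (n + 1))
    return Phi @ beta
```

Under this reading, setting the affine map to a constant per neuron (dropping the input-dependent part) recovers the SLFN-ELM with its m linear parameters, consistent with the SLFN being a particular case of the PLP.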