Abstract

As a single-hidden-layer feedforward neural network, an extreme learning machine (ELM) randomizes the weights between the input layer and the hidden layer, as well as the biases of the hidden neurons, and analytically determines the weights between the hidden layer and the output layer using the least-squares method. This paper proposes a two-hidden-layer ELM (denoted TELM) by introducing a novel method for obtaining the parameters of the second hidden layer (the connection weights between the first and second hidden layers and the bias of the second hidden layer), thereby bringing the actual hidden-layer output closer to the expected hidden-layer output in the two-hidden-layer feedforward network. At the same time, the TELM method inherits the randomness of the ELM technique for the first hidden layer (the connection weights between the input layer and the first hidden layer and the bias of the first hidden layer). Experiments on several regression problems and some popular classification datasets demonstrate that the proposed TELM consistently outperforms the original ELM, as well as some existing multilayer ELM variants, in terms of average accuracy and the required number of hidden neurons.

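For concreteness, the sketch below gives one plausible reading of this training procedure in NumPy: the first hidden layer is random as in a standard ELM, a provisional least-squares solve yields the output weights, the expected second-hidden-layer output is back-computed from the targets, and the second layer's weights and bias are then fitted so that its actual output approaches that expectation. The sigmoid activation, its logit inverse, and the pseudoinverse solves are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(h, eps=1e-7):
    # Inverse of the sigmoid; clip away from 0 and 1 for numerical safety.
    h = np.clip(h, eps, 1.0 - eps)
    return np.log(h / (1.0 - h))

def train_telm(X, T, L, seed=0):
    """Fit a two-hidden-layer ELM with L neurons in each hidden layer.

    X: (N, d) inputs; T: (N, m) targets. Returns all learned parameters.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    # First hidden layer: random weights and biases, as in the original ELM.
    W1 = rng.uniform(-1.0, 1.0, (d, L))
    b1 = rng.uniform(-1.0, 1.0, (1, L))
    H1 = sigmoid(X @ W1 + b1)
    # Provisional output weights by least squares (Moore-Penrose pseudoinverse).
    beta = np.linalg.pinv(H1) @ T
    # Expected second-hidden-layer output: the H2e with H2e @ beta ~= T.
    H2e = T @ np.linalg.pinv(beta)
    # Solve for the second layer's bias and weights so that the actual output
    # sigmoid([1, H1] @ W_he) approaches the expected output H2e.
    H1_aug = np.hstack([np.ones((N, 1)), H1])
    W_he = np.linalg.pinv(H1_aug) @ logit(H2e)
    b2, W2 = W_he[:1, :], W_he[1:, :]
    # Actual second-hidden-layer output, then the final least-squares solve.
    H2 = sigmoid(H1 @ W2 + b2)
    beta = np.linalg.pinv(H2) @ T
    return W1, b1, W2, b2, beta

def predict_telm(X, W1, b1, W2, b2, beta):
    H2 = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return H2 @ beta

if __name__ == "__main__":
    # Toy regression check: learn y = sin(x) on [-3, 3].
    X = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
    T = np.sin(X)
    params = train_telm(X, T, L=20)
    rmse = np.sqrt(np.mean((predict_telm(X, *params) - T) ** 2))
    print(f"training RMSE: {rmse:.4f}")
```

Note that only the second layer's parameters are fitted; every solve is a single pseudoinverse, so training remains non-iterative, mirroring the least-squares character of the original ELM.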