Abstract

Single-hidden-layer neural networks (NNs) are widely used for complex system identification. However, the number of hidden neurons is often chosen by trial and error and is usually large, which commonly leads to over-fitting and a time-consuming training process. In this paper, we propose a two-stage backward elimination (TSBE) method that yields a parsimonious network with fewer hidden neurons while retaining good performance and reducing training time. In the first stage, a neural network with a predetermined number of hidden neurons is trained by stochastic gradient descent (SGD) on part of the training data, and the least absolute shrinkage and selection operator (Lasso) is applied to drop redundant neurons, yielding a simplified neural model. In the second stage, the remaining training data are used to update the parameters of the simplified model. A simulation example validates the approach and shows that it gives a more compact model and a higher level of accuracy compared with a recently proposed pruning-based method.
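The abstract does not include code, so the following is only a minimal NumPy sketch of the two-stage idea, not the authors' implementation. It assumes a tanh hidden layer, a squared-error loss, and approximates the Lasso step by an L1 subgradient penalty on the output weights during Stage-1 SGD training (the paper may instead solve a separate Lasso problem on those weights). All function names, the pruning threshold, and the hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_sgd(X, y, W1, b1, w2, lr=0.01, l1=0.0, epochs=50):
    """One-hidden-layer net y_hat = w2 . tanh(W1 x + b1), trained by SGD.
    An optional L1 penalty on the output weights w2 mimics the Lasso step."""
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            h = np.tanh(W1 @ X[i] + b1)           # hidden activations
            err = w2 @ h - y[i]                   # prediction error
            grad_w2 = err * h + l1 * np.sign(w2)  # L1-penalised gradient
            grad_h = err * w2 * (1 - h**2)        # backprop through tanh
            W1 -= lr * np.outer(grad_h, X[i])
            b1 -= lr * grad_h
            w2 -= lr * grad_w2
    return W1, b1, w2

# Toy system-identification data: y = f(x) + noise, split into two parts.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)
X1, y1, X2, y2 = X[:100], y[:100], X[100:], y[100:]

# Stage 1: train an over-sized net on the first split with an L1 penalty,
# then drop neurons whose output weights have shrunk toward zero.
H = 30  # deliberately over-sized hidden layer
W1 = rng.standard_normal((H, 1))
b1 = np.zeros(H)
w2 = 0.1 * rng.standard_normal(H)
W1, b1, w2 = train_sgd(X1, y1, W1, b1, w2, l1=1e-3)
keep = np.abs(w2) > 1e-2  # pruning threshold is an illustrative choice
W1, b1, w2 = W1[keep], b1[keep], w2[keep]
print(f"neurons kept: {keep.sum()} of {H}")

# Stage 2: fine-tune the simplified net on the second split, no penalty.
W1, b1, w2 = train_sgd(X2, y2, W1, b1, w2, l1=0.0)
```

Splitting the data between the two stages, as the abstract describes, means the Stage-2 update sees examples that did not drive the pruning decision, which is one plausible reason the simplified model can still generalize well.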
