Abstract

Recently, Huang et al. proposed a simple and efficient learning algorithm, referred to as the extreme learning machine (ELM), which has shown that, compared with some conventional methods, the training time of neural networks can be reduced by a factor of thousands. However, recent studies have shown that some of the random hidden nodes may play a very minor role in the network output and thus unnecessarily increase the network complexity. This paper proposes a parallel chaos search based incremental extreme learning machine (PC-ELM) with additional steps to obtain a more compact network architecture. At each learning step, the hidden node whose parameters are selected by the proposed parallel chaos optimization algorithm is added to the existing network so as to minimize the residual error between the target function and the network output. We prove the convergence of PC-ELM for both increased and fixed network architectures. We then apply this approach to several regression and classification problems, using 19 benchmark data sets to test the performance of PC-ELM. Simulation results demonstrate that the proposed method provides better generalization performance and a more compact network architecture.
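The incremental step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the logistic map as the chaos generator, a fixed number of "parallel" chaotic candidate trajectories per step, tanh hidden nodes, and the standard incremental-ELM least-squares rule for each new node's output weight; the function names (`logistic_map`, `pc_elm_sketch`) are hypothetical.

```python
import numpy as np

def logistic_map(x, n):
    # Chaotic sequence via the logistic map (an assumption; the abstract does
    # not specify which chaos map the parallel chaos optimizer uses).
    seq = np.empty(n)
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        seq[i] = x
    return seq

def pc_elm_sketch(X, y, max_nodes=20, n_candidates=10, tol=1e-3, seed=0):
    """Incremental ELM: add one hidden node per step, chosen among
    chaos-generated candidates to minimize the residual error."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    e = y.astype(float).copy()          # residual, initially the target itself
    nodes = []                          # (input weights w, bias b, output weight beta)
    for _ in range(max_nodes):
        best = None
        # "Parallel" candidate search: several independent chaotic trajectories.
        for _ in range(n_candidates):
            x0 = rng.uniform(0.01, 0.99)
            params = logistic_map(x0, d + 1)
            w = 2.0 * params[:d] - 1.0   # map chaos values from (0, 1) to (-1, 1)
            b = 2.0 * params[d] - 1.0
            h = np.tanh(X @ w + b)       # candidate hidden-node output vector
            beta = (e @ h) / (h @ h)     # least-squares output weight (I-ELM rule)
            err = np.linalg.norm(e - beta * h)
            if best is None or err < best[0]:
                best = (err, w, b, beta, h)
        err, w, b, beta, h = best
        nodes.append((w, b, beta))
        e = e - beta * h                 # shrink the residual with the new node
        if err < tol:
            break
    return nodes, e
```

Because each added node's output weight is the least-squares minimizer, the residual norm is non-increasing at every step, which is the property behind the convergence proof the abstract mentions.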
