Abstract
One of the major obstacles in using radial basis function (RBF) neural networks is their tendency to converge to local minima rather than the global minimum. For this reason, an adaptive gradient multiobjective particle swarm optimization (AGMOPSO) algorithm is designed in this paper to optimize both the structure and the parameters of RBF neural networks. First, the AGMOPSO algorithm, based on a multiobjective gradient method and a self-adaptive flight-parameter mechanism, is developed to improve computational performance. Second, the AGMOPSO-based self-organizing RBF neural network (AGMOPSO-SORBF) can optimize the parameters (centers, widths, and weights) as well as determine the network size. The goal of AGMOPSO-SORBF is to find a tradeoff between the accuracy and the complexity of RBF neural networks. Third, the convergence analysis of AGMOPSO-SORBF is detailed, establishing a prerequisite for any successful application. Finally, the merits of the proposed approach are verified on multiple numerical examples. The results indicate that AGMOPSO-SORBF achieves better generalization capability and a more compact network structure than several existing methods.
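To make the setting concrete, the sketch below shows a plain Gaussian RBF network forward pass and the two objectives a multiobjective optimizer such as AGMOPSO would trade off: prediction error (accuracy) and hidden-node count (complexity). This is an illustrative NumPy sketch, not the paper's implementation; the function names and the RMSE/node-count choice of objectives are assumptions for illustration.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of a Gaussian RBF network with a linear output layer.

    x:       (n_samples, n_features) inputs
    centers: (n_hidden, n_features) RBF centers
    widths:  (n_hidden,) Gaussian widths (sigma)
    weights: (n_hidden,) output-layer weights
    """
    # Squared Euclidean distance from every sample to every center
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    # Gaussian hidden-unit activations
    phi = np.exp(-d2 / (2.0 * widths[None, :] ** 2))
    # Linear combination at the output node
    return phi @ weights

def objectives(x, y, centers, widths, weights):
    """Two objectives a multiobjective optimizer could minimize jointly:
    (1) accuracy, measured here as RMSE; (2) complexity, measured here
    as the number of hidden nodes (both are illustrative choices)."""
    pred = rbf_forward(x, centers, widths, weights)
    rmse = np.sqrt(np.mean((pred - y) ** 2))
    return rmse, centers.shape[0]
```

In a particle-swarm setting, each particle would encode one candidate network (its centers, widths, and weights), and the swarm would search for the Pareto front between the two objectives returned by `objectives`.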