Abstract

Previous studies have shown that factorization and random regrouping significantly improve the performance of the cooperative particle swarm optimization (CPSO) algorithm. However, few studies have examined whether this trend continues when CPSO is applied to the training of feedforward neural networks. Neural network training problems often have very high dimensionality and introduce the issue of saturation, which has been shown to significantly affect the behavior of particles in the swarm; thus it should not be assumed that these trends hold. This study identifies the benefits of random regrouping and factorization to CPSO-based neural network training, and proposes a number of approaches to problem decomposition for use in neural network training. Experiments are performed on 11 problems with sizes ranging from 35 up to 32,811 weights and biases, using a number of general approaches to problem decomposition, and state-of-the-art algorithms taken from the literature. This study found that the impact of factorization and random regrouping on solution quality and swarm behavior depends heavily on the general approach to problem decomposition. It is shown that a random problem decomposition is effective in feedforward neural network training, with the added benefit of reducing the issue of problem decomposition to the tuning of a single parameter.
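
For readers unfamiliar with the decomposition step, the Python sketch below (not taken from the paper) illustrates how a random problem decomposition might assign the weights and biases of a network to CPSO subswarms, with random regrouping amounting to repeating the partition at set intervals. The function name random_decomposition and the parameter values are illustrative assumptions only.

```python
import numpy as np

def random_decomposition(dim, num_groups, rng):
    """Randomly partition the indices 0..dim-1 into num_groups subcomponents.

    Illustrative helper: in CPSO each subcomponent would be optimized by its
    own subswarm, and random regrouping simply calls this function again
    after a chosen number of iterations.
    """
    indices = rng.permutation(dim)
    # np.array_split handles the case where dim is not divisible by num_groups.
    return np.array_split(indices, num_groups)

# Example: a small feedforward network with 35 weights and biases,
# decomposed into 5 randomly chosen subcomponents.
rng = np.random.default_rng(seed=0)
groups = random_decomposition(dim=35, num_groups=5, rng=rng)
for i, g in enumerate(groups):
    print(f"subswarm {i}: optimizes weights {sorted(g.tolist())}")
```

Under this scheme the only decomposition decision left to the practitioner is the number of subcomponents (num_groups), which is the single tunable parameter referred to in the abstract.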
