Abstract

Artificial Neural Network (ANN) design is a complex task because performance depends on the architecture, the selected transfer functions, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms: basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a new model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer function of each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANNs designed with the proposed methodology are compared with those designed manually using the well-known back-propagation and Levenberg-Marquardt learning algorithms. Finally, the accuracy of the method is tested on different nonlinear pattern classification problems.
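The swarm search described above can be illustrated with a minimal sketch of basic PSO applied to a vector of candidate synaptic weights. This is not the paper's implementation; the parameter values (inertia, acceleration coefficients, bounds) and function names are illustrative assumptions.

```python
import random

def pso_minimize(fitness, dim, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Basic PSO sketch: each particle encodes one candidate solution
    (e.g. a flattened vector of ANN synaptic weights)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In the methodology, the fitness function would be one of the eight MSE/CER-based measures; here any scalar function of the particle vector can be plugged in.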

Highlights

  • Artificial Neural Networks (ANNs) are systems composed of neurons organized in input, output, and hidden layers

  • Once we determined the best configuration for each algorithm, we performed an exhaustive testing of 30 runs for each pattern classification problem

  • These experiments show the evolution of the fitness function during 5000 generations, the weighted recognition rate, and some examples of the architectures generated with the methodology



Introduction

Artificial Neural Networks (ANNs) are systems composed of neurons organized in input, output, and hidden layers. An ANN is a powerful tool that has been applied to a broad range of problems such as pattern recognition, forecasting, and regression. During training, the ANN continuously adjusts its synaptic values until the acquired knowledge is sufficient, that is, until a specific number of iterations is reached or a goal error value is achieved. Once the learning process or training stage has finished, it is mandatory to evaluate the generalization capabilities of the ANN using samples of the problem different from those used during the training stage. It is expected that the ANN can classify the patterns of a particular problem with acceptable accuracy during both the training and the testing stage.
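The stopping rule described above (iterate until a goal error is reached or the iteration budget is exhausted) can be sketched for a single logistic neuron trained with a delta-rule update. The learning rate, error goal, and update rule here are illustrative assumptions, not the paper's settings.

```python
import math

def train_neuron(samples, lr=0.5, goal_mse=0.01, max_epochs=1000):
    """Train one logistic neuron, stopping when the MSE goal is met
    or the maximum number of epochs is reached."""
    n_in = len(samples[0][0])
    w = [0.0] * n_in
    b = 0.0
    for epoch in range(max_epochs):
        sq_err = 0.0
        for x, t in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1.0 / (1.0 + math.exp(-z))   # logistic transfer function
            e = t - y
            sq_err += e * e
            grad = e * y * (1.0 - y)         # delta-rule gradient term
            for i in range(n_in):
                w[i] += lr * grad * x[i]
            b += lr * grad
        mse = sq_err / len(samples)
        if mse <= goal_mse:                  # goal error value achieved
            return w, b, mse, epoch + 1
    return w, b, mse, max_epochs             # iteration budget exhausted
```

Generalization would then be checked by evaluating the trained weights on held-out samples not seen during this loop.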

