Abstract

This paper proposes a multi-agent (MA) approach for genetic algorithms (GAs) applied to the training of Beta basis function neural networks (BBFNNs). The approach, called the multi-agent distributed genetic algorithm (MADGA), has two advantages. First, it exploits the GA's search efficiency to design a suitable architecture for the Beta system. Second, it improves the GA's convergence by reducing its time complexity through a distributed implementation of the MA system. Dynamically managed agents interact to provide an optimal solution: the best neural network, regarded as a compromise between network performance and structure. For illustration and discussion, BBFNN training sets with two space dimensions are used.
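To make the idea concrete, the sketch below evolves the parameters of a small two-input Beta basis function network with a plain (non-distributed, single-process) genetic algorithm. It is only an illustration of the general technique the abstract names, not the paper's MADGA: the genome layout, the Beta parameter ranges, the GA settings, and the toy target surface are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def beta1d(x, x0, x1, p, q):
    """One-dimensional Beta basis function: nonzero only on (x0, x1)."""
    xc = (p * x1 + q * x0) / (p + q)          # centre fixed by shape params p, q
    y = np.zeros_like(x)
    m = (x > x0) & (x < x1)
    y[m] = ((x[m] - x0) / (xc - x0))**p * ((x1 - x[m]) / (x1 - xc))**q
    return y

K, GENES = 4, 7  # hypothetical: 4 Beta neurons, 7 genes per neuron
# genome per neuron: [x0_a, width_a, x0_b, width_b, p, q, weight] (a, b = inputs)

def predict(g, X):
    out = np.zeros(len(X))
    for x0a, wa, x0b, wb, p, q, w in g.reshape(K, GENES):
        out += w * beta1d(X[:, 0], x0a, x0a + wa, p, q) \
                 * beta1d(X[:, 1], x0b, x0b + wb, p, q)
    return out

# two-dimensional training set with a toy target surface (an assumption)
gx = np.linspace(0.0, 1.0, 15)
X = np.array([(a, b) for a in gx for b in gx])
T = np.exp(-8 * ((X[:, 0] - 0.5)**2 + (X[:, 1] - 0.5)**2))

def fitness(g):
    return -np.mean((predict(g, X) - T)**2)   # negative MSE (higher is better)

def random_genome():
    g = np.empty((K, GENES))
    g[:, [0, 2]] = rng.uniform(-0.5, 0.5, (K, 2))   # interval left ends
    g[:, [1, 3]] = rng.uniform(0.5, 2.0, (K, 2))    # interval widths (> 0)
    g[:, [4, 5]] = rng.uniform(1.0, 4.0, (K, 2))    # shape parameters p, q
    g[:, 6] = rng.uniform(-1.0, 1.0, K)             # output weights
    return g.ravel()

POP, GENS = 40, 60
pop = [random_genome() for _ in range(POP)]
init_best = max(fitness(g) for g in pop)
for _ in range(GENS):
    elite = sorted(pop, key=fitness, reverse=True)[:POP // 2]  # elitist selection
    children = []
    for _ in range(POP - len(elite)):
        pa, pb = rng.choice(len(elite), 2, replace=False)
        mask = rng.random(elite[0].size) < 0.5           # uniform crossover
        child = np.where(mask, elite[pa], elite[pb])
        child = child + rng.normal(0.0, 0.05, child.size)  # Gaussian mutation
        c = child.reshape(K, GENES)
        c[:, [1, 3]] = np.clip(c[:, [1, 3]], 0.1, None)  # keep widths positive
        c[:, [4, 5]] = np.clip(c[:, [4, 5]], 0.5, None)  # keep p, q positive
        children.append(c.ravel())
    pop = elite + children

best = max(pop, key=fitness)
print("best MSE:", -fitness(best))
```

In the paper's setting, the fitness would also penalize network size (the performance/structure compromise the abstract mentions), and the population would be split across cooperating agents to cut the GA's wall-clock cost; here everything runs in one loop for brevity.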
