Abstract

We propose two evolutionary training algorithms for Beta basis function neural networks (BBFNNs). Classic training algorithms start from a predetermined network structure, so the quality of the trained BBFNN depends strongly on that structure; the network obtained by learning on a fixed architecture is generally either insufficient or overcomplicated. This paper describes two genetic learning models for the BBFNN. In the first, continuous genetic model, each network is coded as a variable-length string, and new genetic operators are proposed to evolve a population of individuals by changing the number of neurons in the hidden layer; a fitness function is proposed to evaluate individual networks. Applications to function approximation problems demonstrate the performance of the BBFNN and of the evolutionary algorithm. In the second, discrete genetic model, each network is coded as a matrix whose number of rows equals the number of parameters of the function to be approximated, and the genetic operators again change the number of neurons in the hidden layer. Applications to functions of one and two parameters demonstrate the performance of the genetic model and the suitability of genetic algorithms for designing BBFNNs.
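To make the first (continuous) model concrete, the sketch below shows one plausible realisation of a variable-length encoding for a one-dimensional BBFNN: each chromosome is a list of hidden Beta neurons, so crossover at independent cut points and structural mutation change the hidden-layer size, and fitness trades approximation error against network size. The Beta basis function is written in the form standard in the BBFNN literature; every operator, rate, and helper name here is an illustrative assumption, not the paper's implementation.

```python
# Hedged sketch of a variable-length-chromosome GA for a 1-D BBFNN.
# All design choices (operator rates, size penalty, parameter ranges)
# are assumptions made for illustration only.
import random
import math

def beta(x, x0, x1, p, q):
    """Beta basis function on [x0, x1] with shape parameters p, q > 0."""
    if not (x0 < x < x1):
        return 0.0
    c = (p * x0 + q * x1) / (p + q)          # centre of the Beta profile
    return ((x - x0) / (c - x0)) ** p * ((x1 - x) / (x1 - c)) ** q

# A hidden neuron is (x0, x1, p, q, w); a network is a list of neurons.
def predict(net, x):
    return sum(w * beta(x, x0, x1, p, q) for (x0, x1, p, q, w) in net)

def fitness(net, samples):
    """Inverse of mean squared error, lightly penalising network size."""
    mse = sum((y - predict(net, x)) ** 2 for x, y in samples) / len(samples)
    return 1.0 / (1.0 + mse + 0.01 * len(net))

def random_neuron():
    a, b = sorted(random.uniform(-1, 1) for _ in range(2))
    return (a, b + 1e-3, random.uniform(0.5, 3), random.uniform(0.5, 3),
            random.uniform(-1, 1))

def crossover(p1, p2):
    """One-point crossover on variable-length strings: the cut points may
    differ in each parent, so the child's hidden-layer size can change."""
    i, j = random.randint(0, len(p1)), random.randint(0, len(p2))
    return (p1[:i] + p2[j:]) or [random_neuron()]

def mutate(net):
    net = list(net)
    op = random.random()
    if op < 0.2:                              # structural: add a neuron
        net.append(random_neuron())
    elif op < 0.4 and len(net) > 1:           # structural: remove a neuron
        net.pop(random.randrange(len(net)))
    else:                                     # parametric: perturb one weight
        k = random.randrange(len(net))
        n = list(net[k])
        n[4] += random.gauss(0, 0.1)
        net[k] = tuple(n)
    return net

# Toy run: approximate sin(pi * x) on [-1, 1].
samples = [(x / 20, math.sin(math.pi * x / 20)) for x in range(-20, 21)]
pop = [[random_neuron() for _ in range(random.randint(1, 5))] for _ in range(30)]
for gen in range(200):
    pop.sort(key=lambda net: fitness(net, samples), reverse=True)
    elite = pop[: len(pop) // 2]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(len(pop) - len(elite))]
best = max(pop, key=lambda net: fitness(net, samples))
print(len(best), "hidden neurons, fitness", round(fitness(best, samples), 4))
```

Note how the size penalty in the fitness function lets selection prune overcomplicated networks while the structural mutations let it grow insufficient ones, mirroring the motivation stated in the abstract.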

