Abstract

This paper presents a simple sequential growing and pruning algorithm for radial basis function (RBF) networks. The algorithm, referred to as growing and pruning (GAP)-RBF, uses the concept of the "significance" of a neuron and links it to the learning accuracy. The "significance" of a neuron is defined as its contribution to the network output averaged over all the input data received so far. Using a piecewise-linear approximation of the Gaussian function, a simple and efficient way of computing this significance is derived for uniformly distributed input data. In the GAP-RBF algorithm, both growing and pruning are based on the significance of the "nearest" neuron. In this paper, the performance of the GAP-RBF learning algorithm is compared with that of other well-known sequential learning algorithms such as RAN, RANEKF, and MRAN on an artificial problem with a uniform input distribution and on three real-world, nonuniform, higher-dimensional benchmark problems. The results indicate that the GAP-RBF algorithm can provide comparable generalization performance with a considerably reduced network size and training time.

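To make the "significance" notion concrete, the sketch below computes it directly from its definition in the abstract: the absolute contribution of a Gaussian neuron to the network output, averaged over the inputs observed so far, with pruning triggered when this value falls below an accuracy threshold. This is an illustrative Python sketch only; the names (alpha, center, width, e_min) are assumed rather than the paper's notation, and it uses the empirical average instead of the paper's closed-form piecewise-linear approximation for uniformly distributed inputs.

```python
import numpy as np

def gaussian_response(x, center, width):
    """Response of a Gaussian RBF neuron with the given center and width to input x."""
    return np.exp(-np.sum((x - center) ** 2) / width ** 2)

def significance(alpha, center, width, inputs):
    """Absolute contribution of one neuron to the output, averaged over all inputs seen so far."""
    responses = np.array([gaussian_response(x, center, width) for x in inputs])
    return np.abs(alpha) * responses.mean()

def should_prune(alpha, center, width, inputs, e_min=0.01):
    """Prune the neuron if its significance drops below the desired accuracy (e_min is an assumed name)."""
    return significance(alpha, center, width, inputs) < e_min

# Usage sketch: evaluate one neuron against the data received so far.
inputs = np.random.uniform(-1.0, 1.0, size=(200, 2))
print(significance(alpha=0.5, center=np.zeros(2), width=0.3, inputs=inputs))
```

In the paper itself, the averaging over stored data is avoided: for uniformly distributed inputs, the piecewise-linear approximation of the Gaussian yields a simple expression for this quantity, so only the nearest neuron's significance needs to be checked at each step.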