Abstract

This paper proposes a solution to inconsistent pruning of neurons in a sequential learning Radial Basis Function (RBF) network. We adopt the view that an RBF neuron exhibiting continuously low output over a sequence of training patterns does not, by itself, justify the conclusion that the neuron is insignificant to the overall function being approximated. We establish additional criteria that protect against erroneous pruning of RBF neurons in the hidden layer, and we show that these criteria improve the consistency and stability of neuron evolution. With this stability in the sequential learning process, we also show how the convergence speed of the network can be improved by reducing the number of consecutive observations required before a hidden-layer neuron is pruned.

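For context, the conventional pruning rule that the paper argues against removes a hidden neuron when its normalised output stays below a threshold for a fixed number of consecutive observations. The sketch below illustrates that baseline rule only, under illustrative assumptions: the class name, the parameters `delta` and `m_consec`, and the Gaussian activation are not taken from the paper, and the paper's additional protection criteria are not reproduced here.

```python
import numpy as np


def gaussian_output(x, center, width):
    """Response of a single Gaussian RBF neuron to input pattern x."""
    return np.exp(-np.sum((x - center) ** 2) / (2.0 * width ** 2))


class PrunableRBFLayer:
    """Hidden RBF layer with the conventional consecutive-low-output pruning rule.

    delta    : normalised-output threshold below which a neuron counts as inactive
               (assumed name/value, for illustration only)
    m_consec : consecutive low-output observations required before pruning
               (assumed name/value, for illustration only)
    """

    def __init__(self, centers, widths, delta=0.01, m_consec=50):
        self.centers = list(centers)
        self.widths = list(widths)
        self.delta = delta
        self.m_consec = m_consec
        self.low_counts = [0] * len(self.centers)

    def observe(self, x):
        """Process one training pattern, update counters, and prune neurons."""
        outputs = [gaussian_output(x, c, w)
                   for c, w in zip(self.centers, self.widths)]
        if not outputs:
            return outputs
        max_out = max(max(outputs), 1e-12)  # avoid division by zero
        keep = []
        for i, o in enumerate(outputs):
            # Normalise against the most active neuron for this pattern.
            if o / max_out < self.delta:
                self.low_counts[i] += 1
            else:
                self.low_counts[i] = 0
            # Prune only after m_consec consecutive low-output observations.
            keep.append(self.low_counts[i] < self.m_consec)
        self.centers = [c for c, k in zip(self.centers, keep) if k]
        self.widths = [w for w, k in zip(self.widths, keep) if k]
        self.low_counts = [n for n, k in zip(self.low_counts, keep) if k]
        return outputs
```

The abstract's argument is that a run of low outputs alone is weak evidence of insignificance; the paper's contribution is to add further criteria before pruning, which in turn allows `m_consec` to be made smaller without destabilising the network.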