It is shown that under certain conditions the backpropagation network classifier can produce nonintuitive, nonrobust decision surfaces. These result from the inherent nature of the sigmoid transfer function, the definition of the training set, and the error function used for training. The backpropagation network has no mechanism in the standard training scheme for identifying regions that belong to no known class. The radial basis function network overcomes these difficulties by using a nonmonotonic transfer function based on the Gaussian density function. While producing robust decision surfaces, the radial basis function network also provides an estimate of how close a test case is to the original training data, allowing the classifier to signal that a test case potentially represents a novel class while still presenting the most plausible classification. For applications where this type of behavior is important, such as fault diagnosis, the radial basis function network is shown to offer clear advantages over the backpropagation network. The radial basis function network is also faster to train because the training of its two layers is decoupled.
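A minimal sketch of the novelty-signaling behavior described above, assuming Gaussian hidden units and a linear output layer; the names (`centers`, `widths`, `weights`) and the activation threshold are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rbf_activations(x, centers, widths):
    """Gaussian hidden-unit responses: exp(-||x - c||^2 / (2 w^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def classify_with_novelty(x, centers, widths, weights, threshold=0.1):
    """Return (most plausible class, novelty flag).

    Because the Gaussian units respond only near their centers, a low
    maximum activation means the input lies far from all training data,
    so the classifier can flag it as potentially novel while still
    reporting its best-scoring class.
    """
    h = rbf_activations(x, centers, widths)
    scores = weights @ h              # linear output layer
    is_novel = h.max() < threshold    # weak response from every hidden unit
    return int(np.argmax(scores)), is_novel

# Toy usage: two classes, one prototype center per class.
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
widths = np.array([1.0, 1.0])
weights = np.eye(2)                   # identity output weights for this toy case

print(classify_with_novelty(np.array([0.2, -0.1]), centers, widths, weights))
# -> (0, False): close to the first prototype
print(classify_with_novelty(np.array([20.0, 20.0]), centers, widths, weights))
# -> flagged novel: far from all prototypes
```

A sigmoid hidden layer has no analogue of this check: monotonic units can respond strongly arbitrarily far from the training data, which is the nonrobustness the abstract describes.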