Abstract
It is shown that under certain conditions the backpropagation network classifier can produce nonintuitive, nonrobust decision surfaces. These result from the inherent nature of the sigmoid transfer function, the definition of the training set, and the error function used for training. The backpropagation network has no mechanism in the standard training scheme for identifying regions not in any known class. The radial basis function network overcomes these difficulties by using a nonmonotonic transfer function based on the Gaussian density function. While producing robust decision surfaces, the radial basis function network also provides an estimate of how close a test case is to the original training data, allowing the classifier to signal that a test case potentially represents a novel class while still presenting the most plausible classification. For applications where this type of behavior is important, such as fault diagnosis, the radial basis function network is shown to offer clear advantages over the backpropagation network. The radial basis function network is also faster to train because the training of the two layers is decoupled.
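The novelty-signaling behavior described above can be sketched in a few lines. This is an illustrative implementation only, not the paper's method: the prototype centers, output weights, and novelty threshold below are hypothetical values chosen for the demo. The key point is that a Gaussian hidden unit decays to zero far from its center, whereas a sigmoid keeps responding, so a weak maximum activation indicates the input lies far from all training data.

```python
import math

def rbf_activations(x, centers, width):
    """Gaussian response of each hidden unit: exp(-||x - c||^2 / (2 w^2))."""
    acts = []
    for c in centers:
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        acts.append(math.exp(-d2 / (2.0 * width ** 2)))
    return acts

def classify(x, centers, weights, width, novelty_threshold=0.1):
    """Return (best class index, is_novel).

    If no hidden unit responds strongly, the input is far from every
    training prototype, so flag it as a potential novel class while
    still returning the most plausible label via argmax.
    """
    phi = rbf_activations(x, centers, width)
    # Linear output layer; in practice its weights are fit separately
    # (e.g. by least squares), decoupled from choosing the centers.
    scores = [sum(w * p for w, p in zip(row, phi)) for row in weights]
    is_novel = max(phi) < novelty_threshold
    best = max(range(len(scores)), key=lambda i: scores[i])
    return best, is_novel

# Hypothetical demo: two classes, one Gaussian prototype each,
# identity output weights so each unit votes for its own class.
centers = [(0.0, 0.0), (4.0, 4.0)]
weights = [(1.0, 0.0), (0.0, 1.0)]

print(classify((0.2, -0.1), centers, weights, 1.0))   # near class 0: confident
print(classify((10.0, -10.0), centers, weights, 1.0)) # far from all data: novel
```

A sigmoid network given the second input would still commit to one class at full confidence; here the near-zero hidden activations expose that the classification is an extrapolation.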