Abstract

Probabilistic neural networks (PNNs) build internal density representations based on the kernel, or Parzen, estimator and use Bayesian decision theory to form arbitrarily complex decision boundaries. As with the classical kernel estimator, training is performed in a single pass over the data and asymptotic convergence is guaranteed. Asymptotic convergence, while necessary, says little about finite-sample estimation errors, which can be quite large. One problem that arises with either the kernel estimator or the PNN occurs when one or more of the densities being estimated has a discontinuity. This commonly leads to an expected L∞ error in the pdf on the order of the size of the discontinuity, which can in turn lead to significant classification errors. By using the method of reflected kernels, we have developed a PNN model that does not suffer from this problem. The theory of reflected-kernel PNNs, along with their relation to reflected-kernel Parzen estimators, is presented together with finite-sample examples.
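As a rough illustration of the reflection idea (a minimal sketch, not the paper's exact formulation), the code below applies kernel reflection to a one-dimensional Parzen estimate of a density supported on [0, ∞): each sample contributes its usual Gaussian kernel plus a mirror-image kernel reflected about the boundary, which removes the mass that would otherwise leak past the discontinuity. All function names, parameters, and the choice of a Gaussian kernel are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def parzen_estimate(x, data, h):
    """Ordinary Parzen/kernel density estimate at the points x."""
    u = (x[:, None] - data[None, :]) / h
    return gaussian_kernel(u).mean(axis=1) / h

def reflected_parzen_estimate(x, data, h, boundary=0.0):
    """Parzen estimate with kernels reflected about a known boundary.

    Each sample contributes its usual kernel plus a kernel centered at the
    sample's mirror image, so no probability mass leaks below the boundary
    and the estimate does not collapse toward half its true value there.
    """
    mirrored = 2.0 * boundary - data          # reflect samples about the boundary
    u1 = (x[:, None] - data[None, :]) / h
    u2 = (x[:, None] - mirrored[None, :]) / h
    f = (gaussian_kernel(u1) + gaussian_kernel(u2)).mean(axis=1) / h
    return np.where(x >= boundary, f, 0.0)    # density is zero below the boundary

# Example: exponential data, whose pdf jumps from 0 to 1 at x = 0
rng = np.random.default_rng(0)
data = rng.exponential(scale=1.0, size=500)
x = np.linspace(0.0, 4.0, 200)
f_plain = parzen_estimate(x, data, h=0.2)              # biased near x = 0
f_reflected = reflected_parzen_estimate(x, data, h=0.2)
```

In a PNN, the same per-class density estimates would feed a Bayes decision rule, so correcting the boundary bias of each class-conditional estimate directly reduces the classification error near the discontinuity.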
