Abstract

The probabilistic neural network (PNN) is a variant of the feedforward neural network and has been successfully applied to various pattern classification tasks. Unlike other feedforward models, the PNN, a type of radial basis function network, has essentially only two kinds of network parameters to choose in advance: the locations of the centers and a single radius value. A central issue in applying a PNN is therefore to determine an appropriate number of centers to accommodate within the network. In the original PNN framework, every training sample is allocated to its own centroid vector, so the network generally tends to be large, resulting in demanding computational resources. To alleviate this problem, clustering algorithms are commonly employed to shrink the size of the training data. In this work, reduction of the number of centers in a PNN is addressed via the first neighbor means clustering algorithm, which is non-iterative and requires only a single algorithmic hyper-parameter; such a choice is desirable in practice. Simulation results on seven publicly available pattern classification databases show that the first neighbor means clustering algorithm can yield a relatively compact network within a short computation time, while exhibiting reasonably high classification performance, in comparison with the communication with local agents, k-means, orthogonal least squares, resource allocating network, and relevance vector machine algorithms.
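To make the network-size issue concrete, the following is a minimal sketch of the original PNN scheme the abstract describes, in which every training sample serves as a center and a single shared radius (here called `sigma`) parameterizes the Gaussian kernels. The function name and the score-by-class-mean formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Classify X_test with a basic probabilistic neural network.

    Every training sample is a center, so the network size equals
    len(X_train) -- the growth the abstract says clustering is meant
    to curb. `sigma` is the single shared radius of the Gaussian kernels.
    """
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        # Squared distances from x to every center (training sample).
        d2 = np.sum((X_train - x) ** 2, axis=1)
        # Gaussian kernel activations of the pattern layer.
        k = np.exp(-d2 / (2.0 * sigma ** 2))
        # Summation layer: average activation over each class's centers.
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[np.argmax(scores)])
    return np.array(preds)
```

A center-reduction method such as first neighbor means clustering would replace `X_train`/`y_train` above with a smaller set of centroid vectors, leaving the prediction rule unchanged.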

