Abstract

This paper presents an easy-to-use, constructive training algorithm for probabilistic neural networks, a special type of radial basis function network. In contrast to other algorithms, no predefinition of the network topology is required. The proposed algorithm introduces new hidden units whenever necessary and adjusts the shape of already existing units individually to minimize the risk of misclassification. This leads to smaller networks than classical PNNs and therefore enables the use of large data sets. Using eight classification benchmarks from the StatLog project, the new algorithm is compared to other state-of-the-art classification methods. It is demonstrated that the proposed algorithm generates probabilistic neural networks that achieve comparable classification performance on these data sets. Only two rather uncritical parameters must be adjusted manually, and there is no danger of overtraining: the algorithm clearly indicates the end of training. In addition, the generated networks are small due to the lack of redundant neurons in the hidden layer.
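For context, a classical PNN in the sense referenced above is a Parzen-window classifier in which every training pattern becomes a Gaussian hidden unit with one shared width. The sketch below illustrates that baseline (it is not the paper's constructive algorithm, which instead adds units only when necessary and adapts each unit's width individually); the function name and the `sigma` parameter are illustrative choices.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Classical PNN (Parzen-window) classifier: every training
    pattern is a Gaussian hidden unit with shared width sigma;
    the summation layer averages activations per class and the
    class with the largest density estimate wins.
    Illustrative baseline only, not the constructive algorithm."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        # hidden layer: squared distance to every stored pattern
        d2 = np.sum((X_train - x) ** 2, axis=1)
        activ = np.exp(-d2 / (2.0 * sigma ** 2))
        # summation layer: mean activation per class
        scores = [activ[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

Because each training pattern is stored as a hidden unit, this baseline grows linearly with the data set, which is exactly the redundancy the constructive algorithm avoids.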
