Abstract
A probabilistic neural network (PNN) has a sizable structure, since its hidden layer requires one neuron per training record. As a consequence, it suffers from the curse of dimensionality. A hypothesis can therefore be formulated that, in order to handle large data classification tasks, its internal architecture should be reduced. In this paper, we directly address this issue: a method for reducing the PNN's architecture is elaborated. It proceeds as follows. First, k-means clustering of the data is performed and the resulting centres are stored. Next, for each class separately, the single nearest neighbour of each determined centre is selected. The pattern neurons of the PNN are then established from both (i) the cluster centres and (ii) the records closest to those centroids. The algorithm is applied to classification tasks on seven repository data sets. The PNN is trained by means of four training techniques, each with different kernel functions. A 10-fold cross-validation procedure is used to assess the performance of the original and reduced networks. The obtained results are also compared with those of existing methods in the literature. It is shown that, in the majority of classification cases, the reduced PNN achieves higher accuracy than both the original network and the approaches reported in the literature.
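The reduction procedure described above can be sketched in plain NumPy. This is a minimal illustrative reconstruction, not the authors' implementation: the function names, the number of clusters per class, and the k-means details are assumptions introduced here for clarity.

```python
# Hypothetical sketch of the abstract's reduction step: cluster each class
# with k-means, keep the centroids plus the single training record nearest
# to each centroid, and use that reduced set as the PNN pattern neurons.
# Names and parameters (e.g. k_per_class) are illustrative assumptions.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain NumPy k-means (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each record to its nearest centre.
        labels = np.argmin(
            ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2), axis=1)
        # Recompute centres; keep the old centre if a cluster is empty.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def reduce_training_set(X, y, k_per_class=2):
    """Return (i) cluster centres and (ii) their nearest records, per class."""
    patterns, classes = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        centres = kmeans(Xc, min(k_per_class, len(Xc)))
        for centre in centres:
            nearest = Xc[np.argmin(((Xc - centre) ** 2).sum(axis=1))]
            patterns.extend([centre, nearest])  # centre and closest record
            classes.extend([c, c])
    return np.array(patterns), np.array(classes)
```

Under this sketch, a PNN built on `patterns` has one pattern neuron per retained record instead of one per training record, which is the source of the architecture reduction.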