Abstract

In this article, an iterative procedure is proposed for the training process of the probabilistic neural network (PNN). In each stage of this procedure, the Q(0)-learning algorithm is utilized for the adaptation of the PNN smoothing parameter (σ). Four classes of PNN models are considered in this study. In the first, simplest model, the smoothing parameter takes the form of a scalar; for the second model, σ is a vector whose elements are computed with respect to the class index; the third model has a smoothing parameter vector whose components are determined for each input attribute; finally, the last and most complex of the analyzed networks uses a matrix of smoothing parameters in which each element depends on both the class and the input feature index. The main idea of the presented approach is the appropriate update of the smoothing parameter values according to the Q(0)-learning algorithm. The proposed procedure is verified on six repository data sets. The prediction ability of the algorithm is assessed by computing the test accuracy on 10%, 20%, 30%, and 40% of examples drawn randomly from each input data set. The results are compared with the test accuracy obtained by PNN trained using the conjugate gradient procedure, the support vector machine algorithm, the gene expression programming classifier, the k–Means method, the multilayer perceptron, the radial basis function neural network and the learning vector quantization neural network. It is shown that the presented procedure can be applied to the automatic adaptation of the smoothing parameter of each of the considered PNN models and that it constitutes an alternative training method. PNN trained by the Q(0)-learning based approach is a classifier that can be regarded as one of the top models in data classification problems.
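To make the four smoothing-parameter layouts concrete, the sketch below implements a plain Parzen-window PNN classifier in Python in which σ may be a scalar, a class-indexed vector, a feature-indexed vector, or a class-by-feature matrix. The class name `PNNSketch`, its methods and the shape conventions are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Minimal PNN sketch whose Gaussian kernel width sigma can be
# (a) a scalar, (b) a per-class vector, (c) a per-feature vector, or
# (d) a class-by-feature matrix, mirroring the four model variants
# described above.  Names and shape conventions are assumptions.
import numpy as np


class PNNSketch:
    def __init__(self, sigma):
        # sigma: float, (n_classes,), (n_features,) or (n_classes, n_features)
        self.sigma = np.asarray(sigma, dtype=float)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.patterns_ = {c: X[y == c] for c in self.classes_}
        return self

    def _class_density(self, x, class_idx, c):
        P = self.patterns_[c]                      # training patterns of class c
        sig = self.sigma
        if sig.ndim == 2:                          # class- and feature-dependent
            sig = sig[class_idx]
        elif sig.ndim == 1 and sig.shape[0] == len(self.classes_):
            sig = sig[class_idx]                   # class-dependent scalar
        # (a 1-D sigma of length n_features is shared by every class as-is)
        sig = np.broadcast_to(sig, (P.shape[1],))  # one width per feature
        norm = np.prod(np.sqrt(2.0 * np.pi) * sig)
        d2 = ((x - P) / sig) ** 2                  # scaled squared distances
        return np.mean(np.exp(-0.5 * d2.sum(axis=1))) / norm

    def predict(self, X):
        labels = []
        for x in np.atleast_2d(X):
            scores = [self._class_density(x, i, c)
                      for i, c in enumerate(self.classes_)]
            labels.append(self.classes_[int(np.argmax(scores))])
        return np.array(labels)
```

Because only the relative class densities matter for the decision, the choice of σ layout changes how many kernel widths must be tuned: one value, one per class, one per feature, or one per class-feature pair.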

Highlights

  • The probabilistic neural network (PNN) is an example of a radial basis function based model effectively used in data classification problems

  • The results of our proposed solution are compared with the outcomes obtained by PNN trained using the conjugate gradient procedure (PNNVC–CG), the support vector machine (SVM) algorithm, the gene expression programming (GEP) classifier, the k–Means method, the multilayer perceptron (MLP), the radial basis function neural network (RBFN) and the learning vector quantization neural network (LVQN) in medical data classification problems


Summary

Introduction

The set of system states S, the set of actions A and the reinforcement signal r required by the Q(0)-learning method are defined along with the description of the algorithm. Q(0)-learning, proposed by Watkins [44], is one of the most frequently used reinforcement learning algorithms. It computes the table of all Q(s, a) values (called the Q–table) by successive approximations, where Q(s, a) represents the expected pay-off that an agent can obtain in state s after it performs action a. The Q–table is updated for the state–action pair (s_t, a_t) according to the following formula [44]:

Q(s_t, a_t) ← Q(s_t, a_t) + α [r_{t+1} + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t)],    (9)

where the maximization operator refers to the action a which may be performed in the state s_{t+1}, α ∈ (0, 1] is the learning rate and γ ∈ [0, 1) is the discount factor. Formula (9) will be used as the basis of the algorithm for the PNN's smoothing parameter optimization presented in this work.
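As a concrete illustration of update (9), the snippet below performs one tabular Q(0)-learning step. The state/action space sizes, the learning-rate, discount and exploration values, and the ε-greedy action choice are assumptions made for the example, not details taken from the paper.

```python
# One-step tabular Q(0)-learning update, i.e. formula (9): a minimal sketch
# with assumed state/action space sizes and hyperparameter values.
import numpy as np

n_states, n_actions = 10, 3          # assumed sizes of S and A
alpha, gamma, eps = 0.5, 0.9, 0.1    # learning rate, discount, exploration
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))  # the Q-table, Q(s, a)


def choose_action(s):
    """Epsilon-greedy action selection in state s."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))


def q_update(s, a, r, s_next):
    """Apply formula (9) to the state-action pair (s, a)."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])


# A single interaction step with an assumed environment response:
s = 0
a = choose_action(s)
r, s_next = 1.0, 4                   # reward and next state from the environment
q_update(s, a, r, s_next)
```

In the paper's setting, the agent's actions correspond to modifications of the PNN smoothing parameter values and the reinforcement signal reflects the resulting classification quality, so the same update rule drives the adaptation of σ.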

Probabilistic neural network
Smoothing parameter adaptation for PNNS, PNNC, PNNV and PNNVC
General idea
Experiments
Data sets used in the study
Empirical results
Illustration of the PNN training process
Findings
Conclusions