Abstract

The Recursive Deterministic Perceptron (RDP) feed-forward multilayer neural network is a generalisation of the single-layer perceptron topology. This model can solve any two-class classification problem, unlike the single-layer perceptron, which can only handle classification problems involving linearly separable sets. For any classification problem, the construction of an RDP is performed automatically and convergence is always guaranteed. A generalisation of the two-class RDP exists that always separates m classes in a deterministic way; it is based on a new notion of linear separability and follows naturally from the two-valued RDP. The methods for building two-class RDP neural networks have been extensively tested, but the m-class RDP method has not previously been evaluated. This paper presents the first study of the performance of the m-class method. The study highlights the main advantages and disadvantages of the method by comparing the results obtained when building m-class RDP neural networks with those of more classical methods such as Backpropagation and Cascade Correlation. The networks were trained and tested on the following standard benchmark classification datasets: IRIS, SOYBEAN, and Wisconsin Breast Cancer.
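
To make the abstract's high-level description more concrete, the sketch below illustrates one common reading of the two-class RDP construction: repeatedly carve out a single-class subset that is linearly separable from the remaining points, add an intermediate neuron for its separating hyperplane, and feed that neuron's output back as an extra input dimension until the two classes become linearly separable. The LP-based separability test, the greedy subset-selection heuristic, and all function names are illustrative assumptions, not the authors' exact algorithm, and the label encoding (0/1) is assumed.

```python
import numpy as np
from scipy.optimize import linprog


def separating_hyperplane(X_pos, X_neg):
    """Feasibility LP: find (w, b) with w.x + b >= 1 on X_pos and
    w.x + b <= -1 on X_neg; return None if no such hyperplane exists."""
    d = X_pos.shape[1]
    A_ub = np.vstack([np.hstack([-X_pos, -np.ones((len(X_pos), 1))]),
                      np.hstack([X_neg, np.ones((len(X_neg), 1))])])
    b_ub = -np.ones(len(A_ub))
    res = linprog(np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1), method="highs")
    return (res.x[:d], res.x[d]) if res.success else None


def select_subset(X, y, remaining):
    """Greedily grow a single-class subset of `remaining` that stays
    linearly separable from the other remaining points (one simple
    heuristic; the paper's selection strategy may differ)."""
    for start in remaining:
        others = [i for i in remaining if i != start]
        if separating_hyperplane(X[[start]], X[others]) is None:
            continue
        subset = [start]
        for i in remaining:
            if i == start or y[i] != y[start]:
                continue
            trial = subset + [i]
            rest = [j for j in remaining if j not in trial]
            if rest and separating_hyperplane(X[trial], X[rest]) is not None:
                subset = trial
        return subset
    raise RuntimeError("no separable starting point (duplicate points?)")


def build_rdp(X, y):
    """Sketch of a 2-class RDP-style construction: add intermediate neurons
    (one per selected subset) whose +/-1 outputs augment the inputs,
    until the two classes become linearly separable (y assumed in {0, 1})."""
    X = X.astype(float)
    remaining = list(range(len(X)))
    neurons = []                                   # (w, b) per intermediate neuron
    while separating_hyperplane(X[y == 1], X[y == 0]) is None:
        subset = select_subset(X, y, remaining)
        rest = [j for j in remaining if j not in subset]
        w, b = separating_hyperplane(X[subset], X[rest])
        out = np.where(X @ w + b >= 0, 1.0, -1.0)  # neuron output on every point
        X = np.hstack([X, out[:, None]])           # augment the input space
        neurons.append((w, b))
        remaining = rest
    return neurons, separating_hyperplane(X[y == 1], X[y == 0])
```

In this reading, each intermediate neuron takes as input the original features plus the outputs of all previously added neurons, which is what makes the construction recursive; the m-class generalisation discussed in the paper extends this idea with its own notion of linear separability.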
