Abstract

If two sets are linearly separable (LS), then there exists a single-layer perceptron feedforward neural network that classifies them. We propose three methods for testing linear separability. The first method is based on the notion of convex hull, the second on a halting upper bound for the perceptron learning algorithm, and the third, called the class of linear separability algorithm, is based on characterizing the set of points through which a hyperplane that linearly separates the two classes passes. We also study the treatment of nonlinearly separable (NLS) classification problems and propose two solutions for handling them. The first solution approximates an NLS classification problem by an LS one; we prove that the problem of finding the best approximation is NP-complete. The second solution is an NLS-to-LS transformation algorithm which can be used for building a multilayer feedforward neural network called the Recursive Deterministic Perceptron (RDP). This neural network model can solve any two-class classification problem, even if the two classes are NLS. We prove that an RDP linearly separating two classes X and Y can be constructed in fewer than n steps, where n is the cardinality of the set X ∪ Y. The geometrical halting upper bound for the perceptron algorithm and the class of linear separability algorithm are the principal results of this paper.
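To illustrate the second method, below is a minimal sketch of testing linear separability with the perceptron learning algorithm under an iteration cap. This is not the paper's algorithm: the function name perceptron_ls_test and the fixed cap max_iter are illustrative assumptions, whereas the paper derives a geometrical halting upper bound that would make such a test exact rather than inconclusive.

```python
import numpy as np

def perceptron_ls_test(X, Y, max_iter=10000):
    """Test whether point sets X and Y are linearly separable (LS) by
    running the perceptron learning algorithm with an iteration cap.

    Returns (True, w) if a separating weight vector w (last entry is the
    bias) is found within max_iter updates; (False, w) means the test is
    inconclusive under this arbitrary cap. Replacing max_iter with a true
    halting upper bound, as in the paper, would make the test decisive.
    """
    # Augment each point with a bias coordinate; negate Y's rows so that
    # "correctly classified" uniformly means a positive dot product.
    P = np.vstack([np.hstack([X, np.ones((len(X), 1))]),
                   -np.hstack([Y, np.ones((len(Y), 1))])])
    w = np.zeros(P.shape[1])
    for _ in range(max_iter):
        misclassified = P[P @ w <= 0]
        if len(misclassified) == 0:
            # w gives w.x + b > 0 on X and w.y + b < 0 on Y.
            return True, w
        # Classic perceptron update: add one misclassified point to w.
        w += misclassified[0]
    return False, w  # inconclusive within the cap

# Example: two classes separated by the hyperplane y = 1.
X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.0, 2.0], [1.0, 2.0]])
print(perceptron_ls_test(X, Y)[0])  # True
```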
