The linear threshold element (LTE), or perceptron, is a linear classifier whose capabilities are limited because it fails whenever the input pattern set is linearly nonseparable. Assuming that the patterns are presented sequentially, we derive a theory for detecting linear nonseparability as soon as it appears in the pattern set. This theory is based on the precise determination of the solution region in weight space with the help of a special set of vectors. For this region, called the solution cone, we present a recursive computation procedure that allows immediate detection of nonseparability. Separability-violating patterns may be skipped, so that, at the end, we obtain a linearly separable subset of the original pattern set along with its solution cone. The intriguing aspect of this algorithm is that it can be directly cast into a simple neural-network implementation. In this model the synaptic weights are committed: they are updated only once, and the only change that may happen after that is their destruction. This resembles the behavior of biological neural networks, a feature absent from most other artificial neural-network techniques. Finally, by combining many such neural models we develop a learning procedure capable of separating convex classes.
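To make the skip-on-violation idea concrete, the following is a minimal Python sketch of sequential filtering. It substitutes a linear-programming feasibility test (via `scipy.optimize.linprog`) for the paper's recursive solution-cone update; the functions `is_separable` and `sequential_filter` are illustrative names, not from the paper, and patterns are assumed to be already augmented with a bias component.

```python
import numpy as np
from scipy.optimize import linprog

def is_separable(X, y):
    """Test whether some weight vector w satisfies y_i * (w . x_i) >= 1
    for every kept pattern, i.e. whether the solution cone is non-empty.
    A margin of 1 stands in for the strict inequality; any positive
    margin is equivalent because the cone is scale-invariant."""
    n, d = X.shape
    # linprog solves A_ub @ w <= b_ub; rewrite y_i * (x_i . w) >= 1
    # as -y_i * (x_i . w) <= -1.
    A_ub = -(y[:, None] * X)
    b_ub = -np.ones(n)
    res = linprog(c=np.zeros(d), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * d)
    return res.success

def sequential_filter(patterns):
    """Present patterns one by one; skip any pattern whose inclusion
    would make the kept set linearly nonseparable."""
    kept_X, kept_y = [], []
    for x, label in patterns:           # label is +1 or -1
        trial_X = np.array(kept_X + [x])
        trial_y = np.array(kept_y + [label])
        if is_separable(trial_X, trial_y):
            kept_X.append(x)
            kept_y.append(label)
        # else: separability-violating pattern is skipped
    return np.array(kept_X), np.array(kept_y)
```

The returned subset is linearly separable by construction, mirroring the abstract's claim that the algorithm yields a separable subset of the original pattern set; the paper's own method additionally maintains the solution cone explicitly, which this sketch does not.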