Abstract

Four connectionist/neural models capable of learning arbitrary Boolean functions are presented. Three are provably convergent but differ in generalization power. The fourth is not guaranteed to converge, but its empirical behavior is quite good. The time and space characteristics of the four models are compared over a diverse range of functions and testing conditions, including the ability to learn specific instances, to generalize effectively, and to deal with irrelevant or redundant information. The various approaches demonstrate trade-offs between time and space.
