Abstract

Many techniques have been proposed for improving the generalization performance of fuzzy ARTMAP. We present a study of these architectures in the framework of structural risk minimization and computational learning theory. Fuzzy ARTMAP uses on-line learning, has proven convergence results, and requires relatively few parameters. During its training phase, fuzzy ARTMAP performs empirical risk minimization. One weakness of fuzzy ARTMAP is its tendency to over-train on noisy training data or on naturally overlapping classes; most of the proposed techniques address this issue, directly or indirectly. In this paper we summarize how some of these architectures achieve success as learning algorithms.
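The training dynamics summarized above rest on the category-choice and vigilance mechanism of the fuzzy ART module inside fuzzy ARTMAP. A minimal pure-Python sketch of one training presentation follows; the parameter names (alpha, rho, beta) follow the standard Carpenter-Grossberg notation, and the example inputs are toy values chosen for illustration, not taken from the paper:

```python
def complement_code(a):
    """Complement coding: input a in [0,1]^d becomes (a, 1 - a),
    which keeps the total input norm constant."""
    return list(a) + [1.0 - v for v in a]

def fuzzy_and(x, w):
    """Fuzzy AND: component-wise minimum of input and weight vector."""
    return [min(xi, wi) for xi, wi in zip(x, w)]

def train_step(x, weights, alpha=0.001, rho=0.75, beta=1.0):
    """One fuzzy ART presentation: rank categories by the choice
    function |x ^ w_j| / (alpha + |w_j|), test the winner against the
    vigilance criterion |x ^ w_j| / |x| >= rho, and either update its
    weights (fast learning when beta = 1) or commit a new category."""
    order = sorted(range(len(weights)),
                   key=lambda j: -sum(fuzzy_and(x, weights[j]))
                                  / (alpha + sum(weights[j])))
    for j in order:
        m = fuzzy_and(x, weights[j])
        if sum(m) / sum(x) >= rho:      # vigilance test passed
            weights[j] = [beta * mi + (1 - beta) * wi
                          for mi, wi in zip(m, weights[j])]
            return j
    weights.append(list(x))             # no match: commit new category
    return len(weights) - 1
```

Because every input that fails the vigilance test commits a fresh category, noisy or overlapping training data can proliferate categories, which is the over-training behavior the surveyed architectures attempt to curb.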

