Abstract

We address performance bounds for fuzzy ARTMAP and other ART-based neural network architectures, such as boosted ARTMAP, within the framework of structural risk minimization. Structural risk minimization theory indicates a trade-off between training error and hypothesis complexity; this trade-off directly motivated boosted ARTMAP. In this paper, we present empirical evidence that boosted ARTMAP is a viable learning technique in comparison with fuzzy ARTMAP and other ART-based neural network architectures. We also show direct empirical evidence of decreased hypothesis complexity, together with improved empirical performance, for boosted ARTMAP relative to fuzzy ARTMAP. Applying the Rademacher penalty to boosted ARTMAP on a specific learning problem further indicates its utility in comparison with fuzzy ARTMAP.
