Abstract

We present HILLARY, an incremental learning method that addresses several of the more difficult aspects of learning from examples. Specifically, HILLARY employs ‘hill climbing’ to incrementally learn disjunctive concepts from noisy data in either a relational or attribute-value representation. In treating these aspects, we have noticed an interesting tradeoff between the simplicity of candidate concept descriptions and their coverage of previously seen instances. We discuss HILLARY's learning algorithm, this tradeoff, and the system's evaluation function, and we present empirical studies of its learning behavior on both natural and artificial domains. We show that HILLARY's performance deteriorates linearly with the amount of noise, independent of memory limitations. Our results also show that small improvements in performance come at the expense of large increases in the number of disjuncts, demonstrating the relevance and importance of the tradeoff. We conclude with ideas for future research.
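
To make the simplicity/coverage tradeoff concrete, the following is a minimal sketch of the idea the abstract describes: a hill-climbing learner that keeps a bounded memory of past instances and scores each candidate concept description by its coverage of remembered instances minus a penalty on the number of disjuncts. The representation (disjuncts as attribute-value conjunctions), the candidate-generation operators, the memory bound, and the weight ALPHA are illustrative assumptions, not HILLARY's actual definitions.

```python
# Hedged sketch of a simplicity/coverage tradeoff in incremental,
# hill-climbing concept learning. All names and constants here are
# hypothetical choices for illustration.

ALPHA = 0.05  # assumed penalty per disjunct (simplicity pressure)


def matches(disjunct, instance):
    """A disjunct (a conjunction of attribute-value tests) covers an
    instance when every test it specifies is satisfied."""
    return all(instance.get(attr) == val for attr, val in disjunct.items())


def covers(concept, instance):
    """A disjunctive concept covers an instance if any disjunct matches."""
    return any(matches(d, instance) for d in concept)


def evaluate(concept, memory):
    """Score a candidate description: accuracy on remembered labeled
    instances minus a penalty proportional to the number of disjuncts."""
    correct = sum(1 for inst, label in memory
                  if covers(concept, inst) == label)
    return correct / len(memory) - ALPHA * len(concept)


def learn_incrementally(stream, memory_limit=20):
    """Hill-climbing loop: after each instance arrives, move to whichever
    neighboring hypothesis scores best on the bounded memory."""
    concept, memory = [], []
    for instance, label in stream:
        memory.append((instance, label))
        memory = memory[-memory_limit:]  # bounded memory of past instances
        # Candidate moves: keep the current concept, add the new positive
        # instance as a maximally specific disjunct, or drop a disjunct.
        candidates = [concept]
        if label:
            candidates.append(concept + [dict(instance)])
        for i in range(len(concept)):
            candidates.append(concept[:i] + concept[i + 1:])
        concept = max(candidates, key=lambda c: evaluate(c, memory))
    return concept


if __name__ == "__main__":
    # Toy attribute-value stream: the target concept is "color = red".
    stream = [
        ({"color": "red", "shape": "square"}, True),
        ({"color": "red", "shape": "circle"}, True),
        ({"color": "blue", "shape": "square"}, False),
        ({"color": "blue", "shape": "circle"}, False),
    ]
    print(learn_incrementally(stream))
```

Raising ALPHA in this sketch pushes the learner toward fewer disjuncts at the cost of coverage, which mirrors the reported finding that large increases in the number of disjuncts buy only small gains in performance.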
