Abstract

In a typical algorithmic learning model, a learner has to identify a target object from partial information. Conversely, in a teaching model a teacher has to provide information that allows the learners to identify a target object. We devise two variants of the classical teaching model for Boolean concept classes, which is based on the teaching dimension, and characterize them by teaching-dimension-like combinatorial parameters. In the first model, the learners choose consistent hypotheses of least complexity. We show that 1-decision lists become harder to teach the longer they are and that 2-term DNFs become harder to teach the more terms they have. This contrasts with the teachability results for these classes in the teaching-dimension model. In our second model, the learners choose consistent hypotheses under the assumption that the teacher is optimal. We show that monomials can be taught with a linear number of examples, whereas some 1-decision lists require exponentially many.
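
Both variants are measured against the classical teaching dimension: the teaching dimension of a concept c in a class C is the size of a smallest teaching set, i.e. a labeled sample consistent with c but with no other concept in C. The following is a minimal brute-force sketch of that definition, assuming concepts are represented as sets of their positive instances; the function name teaching_dim and the toy two-variable monomial class are illustrative choices, not taken from the paper.

    from itertools import combinations

    def teaching_dim(concept, concept_class, instances):
        """Size of the smallest labeled sample that distinguishes
        `concept` from every other concept in `concept_class`.
        Concepts are modeled as frozensets of positive instances."""
        others = [c for c in concept_class if c != concept]
        for k in range(len(instances) + 1):
            for sample in combinations(instances, k):
                # `sample` is a teaching set if every other concept
                # disagrees with `concept` on some instance in it.
                if all(any((x in c) != (x in concept) for x in sample)
                       for c in others):
                    return k

    # Toy class (an assumption for illustration): monotone monomials
    # over two Boolean variables, assignments encoded as bit tuples.
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    monomials = [
        frozenset(X),                 # empty monomial, always true
        frozenset({(1, 0), (1, 1)}),  # x1
        frozenset({(0, 1), (1, 1)}),  # x2
        frozenset({(1, 1)}),          # x1 AND x2
    ]
    for m in monomials:
        print(sorted(m), "->", teaching_dim(m, monomials, X))

On this toy class the always-true monomial is taught by a single positive example on (0, 0), which no other monomial in the class satisfies, while each of the other three concepts needs two examples.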
