Abstract

A PAC teaching model under helpful distributions is proposed, which brings the classical ideas of teaching models into the PAC setting: a polynomial-size teaching set is associated with each target concept; the criterion of success is PAC identification; an additional parameter, namely the inverse of the minimum probability assigned to any example of the teaching set, is associated with each distribution; and the running time of the learning algorithm takes this new parameter into account. An Occam's razor theorem and its converse are proved. Some classical classes of Boolean functions, such as decision lists and DNF and CNF formulas, are proved learnable in this model. Comparisons with other teaching models are made: learnability in the Goldman and Mathias model implies PAC learnability under helpful distributions. Note that decision lists and DNF formulas are not known to be learnable in the Goldman and Mathias model. A new simple PAC model, where "simple" refers to Kolmogorov complexity, is introduced. We show that most learnability results obtained within previously defined PAC models can be simply derived from more general results in our model.
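
To make the new parameter concrete, here is a sketch in our own notation (the paper's exact symbols may differ): call a distribution $D$ helpful for a target concept $f$ when $D$ assigns non-zero probability to every example in the teaching set $T_f$ of $f$. The additional parameter is then the inverse of the smallest probability $D$ assigns to a teaching example,
\[
\alpha(f, D) \;=\; \frac{1}{\min_{x \in T_f} D(x)},
\]
and the learner is required to run in time polynomial in the usual PAC quantities ($1/\epsilon$, $1/\delta$, the size of $f$) and in $\alpha(f, D)$.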
