Abstract

In this paper, we question two assumptions of the PAC learning model related to the distribution of examples: that the learning algorithm must learn for an arbitrary distribution (the distribution-independence assumption), and that the distribution of training examples is identical to the distribution of examples that the learning system sees in use (the distribution-invariance assumption). We argue that the distribution-independence assumption is too stringent, and we propose a learning model in which the distributions are required to satisfy a parametrized “reasonableness” criterion. This model allows us to extend results for learning functions under uniform distributions to non-uniform distributions. As an example, a bound is given for the time to learn an arbitrary halfspace in this model using the perceptron algorithm. We also argue that the distribution-invariance assumption in the PAC model is unrealistic, and we give bounds on the sample complexity when the distributions of training and testing examples differ.
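
The abstract's time bound concerns the perceptron algorithm for learning halfspaces. As an illustrative aside only, the following is a minimal sketch of the classical perceptron update rule on a toy linearly separable sample, not the paper's analysis or its “reasonableness” criterion; the function name, parameters, and NumPy-based setup are assumptions of this sketch.

```python
import numpy as np

def perceptron(X, y, max_epochs=1000):
    """Learn a halfspace through the origin with the classical perceptron rule.

    X is an (n, d) array of examples and y holds labels in {-1, +1}.
    Stops once the sample is separated or max_epochs is exhausted.
    (Illustrative sketch; not the paper's algorithm statement.)
    """
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for x, label in zip(X, y):
            if label * np.dot(w, x) <= 0:   # misclassified (or on the boundary)
                w += label * x              # classical additive update
                mistakes += 1
        if mistakes == 0:                   # full pass with no mistakes: done
            return w
    return w

# Toy usage: synthetic examples labeled by a random halfspace.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_star = rng.normal(size=5)
y = np.sign(X @ w_star)
w_hat = perceptron(X, y)
print("training accuracy:", np.mean(np.sign(X @ w_hat) == y))
```

On linearly separable data with positive margin, the update rule above converges after a finite number of mistakes; the paper's contribution is a bound on learning time under its distributional “reasonableness” assumption, which this sketch does not model.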
