Abstract

A formal definition of learning is proposed in which the probability distribution of examples is restricted to a family of reasonable distributions. This definition is more useful for analyzing the computational complexity of learning algorithms than Valiant's distribution-independent learning protocol. We give an upper bound on the time taken by the perceptron algorithm to learn a half-space under this definition. The definition makes explicit how the distribution's characteristics affect learning performance. We also investigate perceptron-like algorithms that choose their own training examples, and show how this affects learning time.
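For reference, the classic perceptron update rule discussed above can be sketched as follows. This is a minimal illustration of the standard algorithm on a hand-picked linearly separable toy set, not the distribution-specific analysis or the example-selecting variants studied in the paper; the function name and data are illustrative.

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Learn a half-space: find (w, b) with sign(w.x + b) == y for all examples.

    X: (n, d) array of examples; y: labels in {-1, +1}.
    Returns (w, b) after convergence or max_epochs passes over the data.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(n):
            if y[i] * (X[i] @ w + b) <= 0:  # misclassified (or on the boundary)
                w += y[i] * X[i]            # standard perceptron update
                b += y[i]
                mistakes += 1
        if mistakes == 0:                   # a full pass with no errors: converged
            break
    return w, b

# Toy linearly separable data: label is the sign of the first coordinate.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron(X, y)
```

The number of updates the algorithm needs depends on the margin of the data, which is exactly the kind of distributional property the definition above is designed to expose.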
