Abstract

Probably approximate learning is the inference of a set (also called a concept) from a small number of sample points, under the probabilistic measure of success proposed in [L. G. Valiant, Proc. ACM Symposium on Theory of Computing, 1984, pp. 436–445]. Within this model, necessary and sufficient conditions are identified for efficient learning of a class of concepts defined on the strings of the binary alphabet. These results are with respect to both time and information complexity measures and are in terms of the asymptotic behaviour of the Vapnik–Chervonenkis dimension of the class. Corresponding results are also obtained for the case when the error in the learning process is to be one-sided, in that the inferred concept is required to be a subset of the concept to be learned. The scope of the learning model is then widened to include the inference of functions. The Vapnik–Chervonenkis dimension is also extended to obtain a measure called the “generalized dimension” of a class of functions. Using this m...
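
Not part of the published abstract, but as an illustration of the central notion it refers to: the Vapnik–Chervonenkis dimension of a concept class is the size of the largest set of sample points the class "shatters", i.e. labels in every possible way. A minimal brute-force sketch in Python (the function names, the finite domain, and the interval example are assumptions made purely for illustration, not anything from the paper):

    from itertools import combinations

    def shatters(concepts, points):
        # A set of points is shattered if each of its 2^k subsets is picked
        # out by some concept, i.e. every binary labeling is realized.
        points = list(points)
        labelings = {tuple(p in c for p in points) for c in concepts}
        return len(labelings) == 2 ** len(points)

    def vc_dimension(concepts, domain):
        # Largest d such that some d-point subset of the domain is shattered.
        # Brute force over all subsets: exponential time, tiny examples only.
        d = 0
        for size in range(1, len(domain) + 1):
            if any(shatters(concepts, s) for s in combinations(domain, size)):
                d = size
            else:
                break
        return d

    # Hypothetical example: all contiguous intervals over {0, ..., 5}.
    domain = list(range(6))
    intervals = [set(range(a, b)) for a in range(7) for b in range(a, 7)]
    print(vc_dimension(intervals, domain))  # pairs are shattered, no triple is -> 2

The abstract's characterization is stated in terms of how this quantity grows asymptotically with the length of the binary strings on which the concepts are defined.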
