Abstract
This paper surveys several models of learnability proposed and investigated by computational learning theorists during the past few years. Computational learning theory is the study of learning as seen from a computational complexity point of view. In addition to the usual space and time complexity, computational learning theory studies the sample complexity, the number of examples seen by the learner. (In a statistical setting, this is known as the sample size.) This paper covers those models of learnability where ideas from Vapnik-Chervonenkis combinatorics have had the greatest impact. A few short proofs are included to give a flavor of some of the ideas involved, but most of the proofs are too long to be included here. The focus is on giving an idea of the variety of models and the relationships between them. For more complete surveys of computational learning theory see (1988), (1990), (1991), (1992), or the proceedings of the annual Workshop on Computational Learning Theory published by Morgan Kaufmann. Some attempt has been made to keep the notation consistent within this paper, which means that it will be inconsistent with a large subset of the references.

Keywords: Boolean Function, Concept Class, Empirical Process, Learning Complexity, Membership Query
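As a concrete illustration of the sample complexity mentioned above (not drawn from the paper itself), the classical sufficient sample size for PAC-learning a finite hypothesis class with a consistent learner can be sketched as follows; the function name and parameters are illustrative choices:

```python
import math

def pac_sample_size(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Sufficient number of examples m for a consistent learner over a
    finite hypothesis class H to be (epsilon, delta)-PAC:

        m >= (1/epsilon) * (ln |H| + ln(1/delta))

    epsilon: accuracy parameter (allowed error of the output hypothesis)
    delta:   confidence parameter (allowed probability of failure)
    """
    m = (math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon
    return math.ceil(m)

# Example: |H| = 2^10 hypotheses, 10% error, 95% confidence.
print(pac_sample_size(1024, epsilon=0.1, delta=0.05))  # -> 100
```

For infinite classes this ln |H| term is replaced by a bound in terms of the Vapnik-Chervonenkis dimension, which is the connection the survey explores.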