Abstract

This special issue of Machine Learning is dedicated to the 20th Annual Conference on Learning Theory (previously known as the Conference on Computational Learning Theory), held in San Diego, CA, USA, June 13–15, 2007, as part of the 2007 Federated Computing Research Conference (FCRC). The authors of seven papers were invited to submit expanded versions of their conference papers, which then went through the standard reviewing process of Machine Learning. The papers were selected primarily for their theoretical significance, with an eye towards their potential to influence future research directions within Learning Theory. Learning Theory is an influential field of Machine Learning that aims to provide both a theoretical underpinning for the basic intuition of what learning really means and a more “readily accessible” mathematical foundation for learning systems applied to a wide range of domains. Stated differently, whereas the intuitive notion of “learning” can be formalized in many different ways, the application domain one has in mind plays a fundamental role in suggesting specific learning models, mathematical assumptions, and performance measures. Hence, Learning Theory is best viewed as a collection of mathematical theories for Machine Learning, most of which concern the formal quantification of the performance of algorithms operating within the assumed models. Altogether, the papers in this special issue represent a snapshot of current lines of research in theoretical aspects of Machine Learning, ranging from on-line learning of individual sequences to active learning models, and from Statistical Learning to Inductive Inference models. Below we briefly introduce each of the papers and provide some background information.

Full Text
Published version (Free)
