Abstract

We consider the problem of searching a domain for points that have a desired property, in the special case where the objective function that determines the properties of points is unknown and must be learned during search. We give a parallel to PAC learning theory that is appropriate for reasoning about the sample complexity of this problem. The learner queries the true objective function at selected points, and uses this information to choose models of the objective function from a given hypothesis class that is known to contain a correct model. These models are used to focus the search on more promising areas of the domain. The goal is to find a point with the desired property in a small number of queries. We define an analog to VC dimension, needle dimension, to be the size of the largest sample in which any single point could have the desired property without the other points' values revealing this information. We give an upper bound on sample complexity that is linear in needle dimension for a natural type of search protocol and a linear lower bound for a class of constrained problems. We also describe the relationship between needle dimension and VC dimension, explore connections between model-based search and active concept learning (including several novel positive results in active learning), and consider a scale-sensitive version of needle dimension. Several simple examples illustrate the dependence of needle dimension on features of search problems.
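The abstract's informal definition of needle dimension can be made concrete for a finite domain and a finite hypothesis class. The sketch below is one plausible reading of that definition, not the paper's formal statement: a sample is treated as "needle-like" if, for every point in it, some hypothesis assigning that point the desired property agrees on all the other sample points with some hypothesis that does not, so the other points' values cannot reveal the point's value. The function names and the binary-valued setup are assumptions made for illustration.

```python
from itertools import combinations

def is_needle_sample(sample, hypotheses):
    """Check whether every point in `sample` could have the desired
    property (value 1) without the other points' values revealing it.
    Each hypothesis is a dict mapping domain points to 0/1."""
    for x in sample:
        rest = [p for p in sample if p != x]
        # Hypotheses under which x has the desired property.
        pos = [g for g in hypotheses if g[x] == 1]
        # x's value is hidden if some hypothesis giving x value 0
        # matches one of those positive hypotheses on all other points.
        hidden = any(
            h[x] == 0 and all(h[p] == g[p] for p in rest)
            for h in hypotheses for g in pos
        )
        if not hidden:
            return False
    return True

def needle_dimension(domain, hypotheses):
    """Size of the largest needle-like sample (brute force)."""
    best = 0
    for k in range(1, len(domain) + 1):
        if any(is_needle_sample(s, hypotheses)
               for s in combinations(domain, k)):
            best = k
    return best

def singleton_class(domain, include_empty=False):
    """Hypotheses h_i(x) = 1 iff x == i: a 'needle in a haystack'.
    Optionally include the all-zero ('no needle') hypothesis."""
    hs = [{p: int(p == i) for p in domain} for i in domain]
    if include_empty:
        hs.append({p: 0 for p in domain})
    return hs
```

On a four-point domain, the singleton class with the all-zero hypothesis included yields needle dimension 4, matching the needle-in-a-haystack intuition that every point may need to be queried; dropping the all-zero hypothesis reduces it to 3, since the last unqueried point's value is then forced by elimination.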
