Abstract

We show that if the closure of a function class F under the metric induced by some probability distribution is not convex, then the sample complexity for agnostically learning F with squared loss (using only hypotheses in F) is Ω(ln(1/δ)/ε²), where 1−δ is the probability of success and ε is the required accuracy. In comparison, if the class F is convex and has finite pseudodimension, then the sample complexity is O((1/ε)(ln(1/ε)+ln(1/δ))). If a nonconvex class F has finite pseudodimension, then the sample complexity for agnostically learning the closure of the convex hull of F is O((1/ε²)(ln(1/ε)+ln(1/δ))). Hence, for agnostic learning, learning the convex hull provides better approximation capabilities with little sample complexity penalty.
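The three sample-complexity bounds stated above can be written more explicitly as follows (a restatement of the abstract's formulas in standard notation, not additional results; m(ε, δ) denotes the number of examples needed for accuracy ε and success probability 1 − δ):

```latex
% Nonconvex closure of F: lower bound for agnostic learning with
% squared loss, using only hypotheses in F.
m(\epsilon, \delta) = \Omega\!\left(\frac{\ln(1/\delta)}{\epsilon^{2}}\right)

% Convex F with finite pseudodimension: upper bound.
m(\epsilon, \delta) = O\!\left(\frac{1}{\epsilon}\left(\ln\frac{1}{\epsilon} + \ln\frac{1}{\delta}\right)\right)

% Closure of the convex hull of a nonconvex F with finite
% pseudodimension: upper bound.
m(\epsilon, \delta) = O\!\left(\frac{1}{\epsilon^{2}}\left(\ln\frac{1}{\epsilon} + \ln\frac{1}{\delta}\right)\right)
```

Comparing the first and third bounds shows the cost of convexification: the convex hull retains the 1/ε² dependence of the nonconvex lower bound, but gains the approximation power of a convex class.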
