Abstract

We discuss the problem of model complexity control, also known as model selection. This problem frequently arises in the context of predictive learning and adaptive estimation of dependencies from finite data. First, we review the problem of predictive learning as it relates to model complexity control. Then, we discuss several issues important for the practical implementation of complexity control, using the framework provided by Statistical Learning Theory (also known as Vapnik-Chervonenkis theory). Finally, we show practical applications of Vapnik-Chervonenkis (VC) generalization bounds for model complexity control. Empirical comparisons of different methods for complexity control suggest practical advantages of VC-based model selection in settings where VC generalization bounds can be rigorously applied. We also argue that VC theory provides a methodological framework for complexity control even when its technical results cannot be directly applied.
