Abstract

Let $(X,Y)$ be a random couple, $X$ being an observable instance and $Y\in\{-1,1\}$ being a binary label to be predicted based on an observation of the instance. Let $(X_{i},Y_{i})$, $i=1,\dots,n$ be training data consisting of $n$ independent copies of $(X,Y)$. Consider a real valued classifier ${\hat{f}_{n}}$ that minimizes the following penalized empirical risk $$\frac{1}{n}\sum\limits_{i=1}^n \ell(Y_{i}f(X_{i})) + \lambda\|f\|^{2} \rightarrow {\rm min},\quad f\in {\mathcal H}$$ over a Hilbert space ${\mathcal H}$ of functions with norm $\|\cdot\|$, $\ell$ being a convex loss function and $\lambda>0$ being a regularization parameter. In particular, ${\mathcal H}$ might be a Sobolev space or a reproducing kernel Hilbert space. We provide some conditions under which the generalization error of the corresponding binary classifier ${\rm sign}(\hat{f}_{n})$ converges to the Bayes risk exponentially fast.
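As a rough illustration, not taken from the paper, the following Python sketch minimizes a penalized empirical risk of this form when ${\mathcal H}$ is the reproducing kernel Hilbert space of a Gaussian kernel and $\ell$ is the logistic loss $\ell(v)=\log(1+e^{-v})$. By the representer theorem the minimizer can be written as $f(x)=\sum_{j}\alpha_{j}k(X_{j},x)$ with $\|f\|^{2}=\alpha^{\top}K\alpha$, so the problem reduces to a finite-dimensional convex minimization over $\alpha$. The data, kernel bandwidth, and value of $\lambda$ are hypothetical choices made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def gaussian_kernel(A, B, bandwidth=1.0):
    """Gram matrix of the Gaussian (RBF) kernel k(a, b) = exp(-|a-b|^2 / (2 h^2))."""
    return np.exp(-cdist(A, B, "sqeuclidean") / (2.0 * bandwidth ** 2))

def penalized_risk(alpha, K, Y, lam):
    """(1/n) sum_i log(1 + exp(-Y_i f(X_i))) + lam * alpha^T K alpha,
    where f(X_i) = (K alpha)_i by the representer theorem."""
    f = K @ alpha
    return np.mean(np.logaddexp(0.0, -Y * f)) + lam * (alpha @ K @ alpha)

# Toy training sample (assumed setup): labels follow the sign of the
# first coordinate, corrupted by Gaussian noise.
rng = np.random.default_rng(0)
n = 80
X = rng.normal(size=(n, 2))
Y = np.sign(X[:, 0] + 0.2 * rng.normal(size=n))

K = gaussian_kernel(X, X)
lam = 1e-2  # regularization parameter lambda (hypothetical value)
res = minimize(penalized_risk, np.zeros(n), args=(K, Y, lam), method="L-BFGS-B")
alpha = res.x

# The binary classifier studied in the abstract is sign(f_n).
predictions = np.sign(K @ alpha)
print("training error of sign(f_n):", np.mean(predictions != Y))
```

The quantity the paper bounds is the excess of the generalization error of ${\rm sign}(\hat{f}_{n})$ over the Bayes risk; the training error printed above is shown only to make the sketch concrete.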
