Abstract

We investigate how the training curve of isotropic kernel methods depends on the symmetry of the task to be learned, in several settings. (i) We consider a regression task, where the target function is a Gaussian random field that depends only on d_∥ variables, fewer than the input dimension d. We compute the expected test error ϵ and find that it follows ϵ ∼ p^{−β}, where p is the size of the training set. We find that β ∼ 1/d independently of d_∥, supporting previous findings that the presence of invariants does not resolve the curse of dimensionality for kernel regression. (ii) Next we consider support-vector binary classification and introduce the stripe model, in which the data label depends on a single coordinate, y(x) = y(x_1), corresponding to parallel decision boundaries separating labels of different signs, and we consider the case where there is no margin at these interfaces. We argue and confirm numerically that, for large bandwidth, β = (d − 1 + ξ)/(3(d − 1) + ξ), where ξ ∈ (0, 2) is the exponent characterizing the singularity of the kernel at the origin. This estimate improves on classical bounds obtainable from Rademacher complexity. In this setting there is no curse of dimensionality, since β → 1/3 as d → ∞. (iii) We confirm these findings for the spherical model, for which y(x) = y(|x|). (iv) In the stripe model, we show that, if the data are compressed along their invariants by some factor λ (an operation believed to take place in deep networks), the test error is reduced by a factor λ^{−2(d−1)/(3(d−1)+ξ)}.

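As a rough illustration of the classification result, the sketch below trains a support-vector machine with an isotropic Laplace kernel (singularity exponent ξ = 1) on a toy stripe task and compares the fitted learning-curve exponent with (d − 1 + ξ)/(3(d − 1) + ξ). This is not the authors' code: the data distribution (uniform in a box), the bandwidth σ, the scikit-learn SVC setup and all constants are illustrative assumptions, and fitting over a handful of training-set sizes only gives a crude estimate of β.

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import euclidean_distances

rng = np.random.default_rng(0)
d = 5            # input dimension
xi = 1.0         # Laplace kernel: K(x) ~ K(0) - c*|x|^xi near the origin, so xi = 1
sigma = 10.0     # bandwidth chosen large compared with the size of the data domain

def laplace_kernel(a, b):
    # Isotropic Laplace kernel K(x, x') = exp(-|x - x'| / sigma)
    return np.exp(-euclidean_distances(a, b) / sigma)

def stripe_labels(x):
    # Label depends on the single coordinate x_1 only: alternating stripes of
    # unit width, with no margin at the interfaces
    return np.sign(np.sin(np.pi * x[:, 0]))

def test_error(p, n_test=5000):
    # Train a (near) hard-margin SVM on p points and estimate the test error
    x_tr = rng.uniform(-2.0, 2.0, size=(p, d))
    x_te = rng.uniform(-2.0, 2.0, size=(n_test, d))
    clf = SVC(kernel=laplace_kernel, C=1e6)
    clf.fit(x_tr, stripe_labels(x_tr))
    return np.mean(clf.predict(x_te) != stripe_labels(x_te))

ps = np.array([128, 256, 512, 1024, 2048])
errs = np.array([np.mean([test_error(p) for _ in range(5)]) for p in ps])

beta_fit = -np.polyfit(np.log(ps), np.log(errs), 1)[0]   # slope of the log-log learning curve
beta_pred = (d - 1 + xi) / (3 * (d - 1) + xi)
print(f"fitted beta ~ {beta_fit:.2f}, predicted beta = {beta_pred:.2f}")

With ξ = 1 and d = 5 the predicted exponent is 5/13 ≈ 0.38; the prediction only applies in the large-bandwidth, large-p regime, so fits at small p can deviate noticeably.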