Abstract
We confirm, through multi-Gaussian support vector machine (SVM) classification, that information about the intrinsic dimension of Riemannian manifolds can be used to characterize the efficiency (learning rates) of learning algorithms. We study an approximation scheme realized by convolution operators involving Gaussian kernels with flexible variances. The essential analysis lies in estimating the approximation order in the Lp (1 ≤ p < ∞) norm as the variance of the Gaussian tends to zero. This differs from the analysis of approximation in C(X), since pointwise estimates no longer apply. The Lp setting arises in the SVM case because the approximated function is the Bayes rule, which is in general not continuous. The approximation error is estimated under a regularity condition requiring the approximated function to lie in some interpolation space. Learning rates for multi-Gaussian regularized classifiers with general classification loss functions are then derived; these rates depend on the intrinsic dimension of the Riemannian manifold rather than on the dimension of the underlying Euclidean space. Here the input space is assumed to be a connected, compact C∞ Riemannian submanifold of ℝⁿ. The uniform normal neighborhoods of the Riemannian manifold and the radial basis form of the Gaussian kernels play an important role.
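To make the multi-Gaussian kernel idea concrete, the following is a minimal illustrative sketch, not the paper's algorithm: the toy data (points near a circle, a 1-dimensional manifold embedded in ℝ²), the candidate variances in `sigmas`, the uniform averaging of Gaussian kernels, and the use of scikit-learn's `SVC` with a precomputed Gram matrix are all our own assumptions, standing in for the regularized classifiers with general loss functions analyzed in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def gaussian_kernel(X, Y, sigma):
    """Radial-basis Gaussian kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

# Toy data: two classes sampled near a circle (intrinsic dimension 1)
# embedded in R^2 (ambient dimension 2), mirroring the manifold setting.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * rng.normal(size=(200, 2))
y = (theta < np.pi).astype(int)

# "Flexible variances": average Gaussian kernels over several widths.
# The specific values and the uniform average are illustrative assumptions;
# in practice the variances would be selected, e.g., by cross-validation.
sigmas = [0.1, 0.5, 1.0]
K = sum(gaussian_kernel(X, X, s) for s in sigmas) / len(sigmas)

# SVM with the precomputed multi-Gaussian Gram matrix.
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print("training accuracy:", clf.score(K, y))
```

An average of Gaussian kernels is itself a positive semidefinite kernel, so the precomputed Gram matrix is valid for the SVM; this is one simple way to realize "flexible variances" without committing to a single kernel width.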