Abstract

We consider the regression problem through learning with a regularization scheme in a data-dependent hypothesis space with an ℓ1-regularizer. The data-dependent nature of the kernel-based hypothesis space provides flexibility for the learning algorithm. The regularization scheme differs essentially from the standard one in a reproducing kernel Hilbert space: the kernel is not necessarily symmetric or positive semi-definite, and the regularizer is the ℓ1-norm of the coefficients in a function expansion involving the samples. These differences lead to additional difficulty in the error analysis. In this paper we apply concentration techniques with ℓ2-empirical covering numbers to improve the learning rates for the algorithm. The sparsity of the algorithm is studied on the basis of our error analysis. We also show that the function space involved in the error analysis, induced by the ℓ1-regularizer and the non-symmetric kernel, behaves well in terms of the ℓ2-empirical covering numbers of its unit ball.
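For concreteness, a minimal sketch of a coefficient-based ℓ1 scheme of the kind the abstract describes: the hypothesis space is spanned by kernel sections at the sample points, f(x) = Σ_j α_j K(x, x_j), and one minimizes empirical squared error plus λ·Σ_j |α_j|. The kernel choice and the ISTA solver below are our own illustration under these assumptions, not the paper's method.

```python
import numpy as np

def kernel(x, t, sigma=1.0):
    # Hypothetical continuous kernel; the scheme does not require K
    # to be symmetric or positive semi-definite.
    return np.exp(-np.sum((x - t) ** 2) / (2 * sigma ** 2))

def l1_kernel_regression(X, y, lam=0.1, n_iter=500):
    """ISTA solver (our illustrative choice) for
       min_a (1/m) * ||K a - y||^2 + lam * ||a||_1,
       where K[i, j] = kernel(X[i], X[j])."""
    m = len(y)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    alpha = np.zeros(m)
    # Lipschitz constant of the gradient of the quadratic term.
    L = 2.0 / m * np.linalg.norm(K, 2) ** 2
    for _ in range(n_iter):
        grad = 2.0 / m * K.T @ (K @ alpha - y)
        z = alpha - grad / L
        # Soft-thresholding: the proximal step for the l1 penalty;
        # it drives many coefficients to exactly zero (sparsity).
        alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return alpha, K

# Usage on toy 1-D data: the learned expansion is typically sparse.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha, K = l1_kernel_regression(X, y, lam=0.05)
print("nonzero coefficients:", np.count_nonzero(alpha))
```

The sparsity visible in the output is the phenomenon the abstract refers to: the ℓ1 penalty selects a small subset of the kernel sections, which is what the paper's error analysis is used to quantify.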
