Abstract
We present a sparse representer theorem for regularization networks in a reproducing kernel Banach space with the $ \ell^1 $ norm, derived using the theory of convex analysis. The theorem states that the extreme points of the solution set of a regularization network in such a sparsity-promoting space belong to the span of kernel functions centered on at most $ n $ adaptive points of the input space, where $ n $ is the number of training samples. Under Lebesgue constant assumptions on the reproducing kernels, we recover the relaxed representer theorem and the exact representer theorem for that space known from the literature. Finally, we perform numerical experiments on synthetic data and real-world benchmark data in reproducing kernel Banach spaces with the $ \ell^1 $ norm and in reproducing kernel Hilbert spaces, both equipped with Laplacian kernels. The numerical results demonstrate the advantages of sparse regularized learning.
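For orientation, here is a minimal sketch of the setting in our own notation (not necessarily that of the paper). Given training data $ (x_i, y_i)_{i=1}^{n} $, a loss $ L $, and a regularization parameter $ \lambda > 0 $, a regularization network in an $ \ell^1 $-norm reproducing kernel Banach space $ \mathcal{B} $ with reproducing kernel $ K $ solves
$$
\min_{f \in \mathcal{B}} \; \frac{1}{n} \sum_{i=1}^{n} L\bigl(f(x_i), y_i\bigr) + \lambda \, \|f\|_{\mathcal{B}},
$$
and a sparse representer theorem of the kind described above asserts that every extreme point of the solution set admits a representation
$$
f^{*} = \sum_{j=1}^{m} c_j \, K(\cdot, z_j), \qquad m \le n,
$$
where the centers $ z_j $ are adaptive points of the input space rather than necessarily the training inputs themselves.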