Abstract

In kernel-based learning, the random projection method, also called random sketching, has been successfully used in kernel ridge regression to reduce the computational burden in the big data setting while retaining the minimax convergence rate. In this work, we consider its use in sparse multiple kernel learning problems, where a closed-form optimizer is not available; this poses significant technical challenges, and the existing results do not carry over directly. Even when random projection is not used, our risk bound improves on the existing results in several respects. We also illustrate the use of random projection via some numerical examples.
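To make the basic idea concrete, the following is a minimal sketch of random-projection (sketched) kernel ridge regression on toy data, not the paper's sparse multiple kernel learning method: the solution is restricted to the subspace spanned by a random Gaussian sketch of the kernel matrix, so the linear system solved has size m x m with m much smaller than n. The data, kernel, bandwidth, and regularization parameter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative assumption, not from the paper).
n = 200
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X, X)   # n x n kernel matrix
lam = 1e-2             # ridge penalty (assumed value)
m = 30                 # sketch dimension, m << n

# Gaussian sketch matrix S of size m x n.
S = rng.standard_normal((m, n)) / np.sqrt(m)

# Restrict the estimator to the sketched subspace: f = K S^T alpha.
# The ridge objective then reduces to an m x m linear system.
KS = K @ S.T                    # n x m
A = KS.T @ KS + lam * (S @ KS)  # m x m
alpha = np.linalg.solve(A, KS.T @ y)

y_hat = KS @ alpha              # in-sample fitted values
mse = np.mean((y_hat - y) ** 2)
```

The cost of the solve drops from O(n^3) for full kernel ridge regression to O(m^3) plus the cost of forming the sketched matrices, which is the computational saving the abstract refers to.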
