Abstract
Multiple kernel learning (MKL) was proposed to address kernel fusion. MKL learns a linear combination of several kernels and simultaneously solves the support vector machine (SVM) associated with the combined kernel. The current MKL framework encourages sparsity of the kernel combination coefficients. When a significant portion of the kernels are informative, enforcing sparsity selects only a few kernels and may discard useful information. In this paper, we propose elastic multiple kernel learning (EMKL) to achieve adaptive kernel fusion. EMKL uses a mixing regularization function to compromise between sparsity and non-sparsity. Both MKL and SVM can be regarded as special cases of EMKL. Building on the gradient descent algorithm for the MKL problem, we propose a fast algorithm for solving the EMKL problem. Results on simulated datasets demonstrate that EMKL compares favorably to both MKL and SVM. We further apply EMKL to gene set analysis and obtain promising results. Finally, we study the theoretical advantages of EMKL compared to other non-sparse MKL formulations.
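The abstract does not state the mixing regularizer explicitly; the following is a minimal sketch of an elastic-net-style penalty on the kernel combination weights, where the weights d_m, the number of kernels M, and the trade-off parameter lambda are illustrative assumptions rather than the paper's notation.

% Hypothetical elastic-net-style mixing regularizer on the kernel weights
% d = (d_1, ..., d_M); lambda in [0, 1] trades the sparsity-inducing l1 term
% off against the non-sparse l2 term.
\begin{equation}
  \Omega(d) \;=\; \lambda \sum_{m=1}^{M} d_m
            \;+\; (1 - \lambda) \sum_{m=1}^{M} d_m^{2},
  \qquad d_m \ge 0 .
\end{equation}

Under this reading, lambda = 1 would recover a sparse l1-regularized MKL, while lambda = 0 would leave only the l2 term, whose non-sparse solutions resemble an SVM trained on an unweighted sum of kernels; this would be consistent with the abstract's statement that MKL and SVM are special cases of EMKL.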