Abstract

Multiple kernel learning (MKL) for feature selection uses kernels to capture complex properties of features and has proven to be among the most effective approaches to feature selection. A natural way to obtain sparse solutions is to use the l0-norm; however, optimization problems involving the l0-norm are NP-hard. Previous MKL methods therefore typically use the l1-norm to obtain sparse kernel combinations. The l1-norm, as a convex approximation of the l0-norm, sometimes fails to attain the desired solution of the l0-norm regularized problem and may degrade prediction accuracy. In contrast, various non-convex approximations of the l0-norm have been proposed and perform better in many linear feature selection methods. In this paper, we propose a novel l0-norm based MKL method (l0-MKL) for feature selection, with a non-convex approximation constraint on the kernel combination coefficients so that features are selected automatically. Considering the better empirical performance of indefinite kernels over positive definite kernels, our l0-MKL is built on the primal form of multiple indefinite kernel learning for feature selection. The non-convex optimization problem of l0-MKL is further reformulated as a difference of convex functions (DC) program and solved by the DC algorithm (DCA). Experiments on real-world datasets demonstrate that l0-MKL outperforms related state-of-the-art methods in both feature selection and classification performance.
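To make the DC reformulation concrete, here is a minimal sketch. The abstract does not specify which non-convex surrogate l0-MKL uses; the capped-l1 penalty below, applied to kernel combination coefficients d = (d_1, ..., d_M) with d_m >= 0, is one common choice that admits an explicit DC decomposition:

```latex
% Illustrative sketch only: the capped-l1 surrogate is an assumption,
% not necessarily the approximation used by l0-MKL.
\[
  \|d\|_0 \;\approx\; \sum_{m=1}^{M} \frac{\min(d_m,\theta)}{\theta}
  \;=\; \underbrace{\frac{1}{\theta}\sum_{m=1}^{M} d_m}_{g(d)\ \text{(convex)}}
  \;-\; \underbrace{\frac{1}{\theta}\sum_{m=1}^{M} \max(d_m-\theta,\,0)}_{h(d)\ \text{(convex)}},
  \qquad \theta > 0 .
\]
% Generic DCA template: linearize the concave part $-h$ at the current
% iterate via a subgradient $y^{k} \in \partial h(d^{k})$, then solve the
% resulting convex subproblem (assuming the remaining objective $J(d)$
% is convex in $d$):
\[
  d^{k+1} \in \arg\min_{d \ge 0}\;
  J(d) \;+\; \lambda\, g(d) \;-\; \lambda\, \langle y^{k},\, d \rangle ,
\]
```

Here lambda is the regularization weight and theta controls how tightly the surrogate approximates the l0-norm (as theta shrinks, the penalty approaches a count of nonzero coefficients). This is the generic DCA template rather than the paper's exact update rule.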
