Abstract

Omics data typically have ultra-high dimension (p) and small sample size (n). Standard support vector machines (SVMs), which minimize the L2 norm of the primal variables, yield solutions that are sparse only in the dual variables. L1-based SVMs, which directly minimize the L1 norm, have been used for feature selection with omics data. However, most existing methods solve the primal formulation of the problem directly, which is not computationally scalable: the computational cost grows with the number of features. In addition, the L1 norm is known to be asymptotically biased and inconsistent for feature selection. In this paper, we develop an efficient method for sparse support vector machines based on L0 norm approximation. The proposed method approximates the L0 minimization by solving a series of L2 optimization problems, which can be formulated in terms of the dual variables. It recovers the optimal solution for the p primal variables by estimating the n dual variables, which is more efficient when the sample size is small relative to the number of features. The L0 approximation induces sparsity in both the dual and primal variables and can therefore be used for both feature and sample selection. In simulations, the proposed method selects far fewer features while achieving comparable performance. We apply the proposed method to feature selection with metagenomic sequencing and gene expression data, where it efficiently identifies biologically important genes and taxa.
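To illustrate the general idea of approximating an L0 penalty by a sequence of L2 (dual) problems, the following is a minimal sketch, not the authors' implementation: it assumes an iteratively reweighted L2 scheme in which each step solves a weighted L2 SVM in the dual (an n-by-n problem) via a precomputed kernel, and the reweighting rule, the smoothing constant eps, and the function name l0_approx_svm are illustrative assumptions rather than details from the paper.

# A minimal sketch (not the paper's exact algorithm) of approximating an
# L0-penalized linear SVM by solving a sequence of reweighted L2 dual problems.
# The reweighting rule and the smoothing constant eps are illustrative choices.
import numpy as np
from sklearn.svm import SVC

def l0_approx_svm(X, y, C=1.0, eps=1e-3, n_iter=20, tol=1e-6):
    """X: (n, p) feature matrix; y: (n,) labels in {-1, +1}."""
    n, p = X.shape
    d = np.ones(p)                      # per-feature weights of the L2 penalty
    w = np.zeros(p)
    for _ in range(n_iter):
        D_inv = 1.0 / d
        K = (X * D_inv) @ X.T           # n x n kernel: the dual problem has n variables, not p
        svc = SVC(C=C, kernel="precomputed").fit(K, y)
        # Recover the p primal weights from the n dual variables (alpha_i * y_i).
        alpha_y = np.zeros(n)
        alpha_y[svc.support_] = svc.dual_coef_.ravel()
        w_new = D_inv * (X.T @ alpha_y)
        # Reweight so that small coefficients are penalized more heavily,
        # pushing them toward zero (an L0-like effect).
        d = 1.0 / (w_new ** 2 + eps)
        converged = np.max(np.abs(w_new - w)) < tol
        w = w_new
        if converged:
            break
    selected = np.flatnonzero(np.abs(w) > np.sqrt(eps))
    return w, selected

Because each iteration only requires an n-by-n kernel matrix and n dual variables, the cost of each step scales with the sample size rather than the number of features, which is the efficiency argument made in the abstract.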
