Abstract

Background: Most genomic data have ultra-high dimensions, with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high-dimensional (m > 10,000) data is time-consuming and computationally inefficient. In current microarray analysis, the common practice is to select a few thousand (or few hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and cause biologically important genes to be missed.

Results: The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem, which involves a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are updated explicitly and the sparsity in the solution is exploited.

Conclusions: Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high-dimensional genomic data. We have demonstrated the performance of our methods with both simulated and real data. In our limited computational studies, the proposed method performs very well.
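To make the dual computation concrete, the following is a minimal sketch in Python of generic kernel ridge regression with a linear kernel on toy data; it illustrates how the n × n dual system yields primal (per-gene) weights without any m × m algebra. It is an illustration of the dual idea only, not the paper's AFT loss or adaptive penalty, and all names and sizes in it are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 10_000                       # n samples, m genes, n << m
X = rng.standard_normal((n, m))          # hypothetical expression matrix
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(n)

lam = 1.0                                # ridge penalty
K = X @ X.T                              # n x n linear kernel matrix
alpha = np.linalg.solve(K + lam * np.eye(n), y)   # dual coefficients

# For a linear kernel the primal weights are recovered explicitly as
# w = X^T alpha, so per-gene effects are available for selection
# without ever forming an m x m system.
w = X.T @ alpha
top_genes = np.argsort(np.abs(w))[::-1][:5]
print(top_genes)

Solving the dual system costs O(n^3) rather than O(m^3), which is why the approach remains tractable when m is in the tens of thousands and n is in the hundreds.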

Highlights

  • Most genomic data have ultra-high dimensions with more than 10,000 genes

  • LASSO-type penalties and their generalized versions have been proposed for gene selection with high-dimensional genomic profiles and censored survival outcomes [15,16,17,18]

  • In current microarray analysis, the common practice is to select a few thousand genes using filter-based methods and then apply a LASSO-type penalty to further reduce the number of disease-associated genes



Introduction

Most genomic data have ultra-high dimensions, with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. In current microarray analysis, the common practice is to select a few thousand (or few hundred) genes using filter-based methods (such as the t-test) or other univariate statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and cause biologically important genes to be missed. The dual solution with a kernel proposed in this article attempts to resolve these inadequacies by solving a much smaller n × n problem.
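For contrast, here is a minimal sketch in Python of the conventional two-step pipeline the introduction criticizes, assuming scikit-learn; the filter (an F-test screen), the cutoff k = 500, and the uncensored toy response are all illustrative assumptions, and survival censoring is ignored for simplicity.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, m = 100, 10_000
X = rng.standard_normal((n, m))                        # expression matrix
y = X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(n)   # toy response

# Step 1: univariate screen keeps only the top-k genes.
# Genes whose effect is visible only jointly with others can be
# discarded here, which is the bias the article points out.
screen = SelectKBest(f_regression, k=500).fit(X, y)
X_kept = screen.transform(X)

# Step 2: LASSO on the screened subset.
lasso = LassoCV(cv=5).fit(X_kept, y)
selected = screen.get_support(indices=True)[lasso.coef_ != 0]
print(selected)

Because step 2 only ever sees the genes that survive step 1, any error in the univariate screen propagates to the final gene list, which is the inadequacy the single-step dual kernel approach is designed to avoid.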

