Abstract
Methods based on partial least squares (PLS) regression, which has recently gained much attention in the analysis of high-dimensional genomic datasets, have been developed since the early 2000s for performing variable selection. Most of these techniques rely on tuning parameters that are often determined by cross-validation (CV) based methods, which raises serious stability issues. To overcome this, we have developed a new dynamic bootstrap-based method for significant-predictor selection, suitable both for PLS regression and for its incorporation into generalized linear models (GPLS). It relies on establishing bootstrap confidence intervals, which allows the significance of predictors to be tested at a preset type I risk α, and avoids CV. We have also developed adapted versions of sparse PLS (SPLS) and sparse GPLS regression (SGPLS), using a recently introduced non-parametric bootstrap-based technique to determine the numbers of components. We compare their variable selection reliability, their stability with respect to tuning-parameter determination, and their predictive ability, using simulated data for PLS and real microarray gene expression data for PLS-logistic classification. We observe that our new dynamic bootstrap-based method separates random noise in y from the relevant information better than the other methods, leading to better accuracy and predictive ability, especially at non-negligible noise levels.
Highlights
Partial least squares (PLS) regression, introduced by [1], is a well-known dimension-reduction method, notably in chemometrics and spectrometric modeling [2]
We focus on the second type of adapted PLS regression, referred to as GPLS
In order to take into account these theoretical results, we have developed a new dynamic bootstrap-based approach for variable selection relevant for both the PLS and GPLS frameworks
Summary
Partial least squares (PLS) regression, introduced by [1], is a well-known dimension-reduction method, notably in chemometrics and spectrometric modeling [2]. We focus on the PLS univariate response framework, better known as PLS1. Let n be the number of observations and p the number of covariates. y = (Y1, …, Yn)T ∈ Rn represents the response vector, with (.)T denoting the transpose. The original underlying algorithm, developed to deal with continuous responses, consists of building latent variables tk, 1 ≤ k ≤ K, called components, as linear combinations of the original predictors X = (X1, …, Xp) ∈ Mn,p(R), where Mn,p(R) represents the set of matrices of n rows and p columns.
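The construction of the components tk described above can be sketched with a minimal NIPALS-style PLS1 routine. This is an illustrative sketch only, not the authors' implementation: the function name `pls1_components` and the synthetic data are assumptions made for the example, and X and y are centered inside the function.

```python
import numpy as np

def pls1_components(X, y, K):
    """Build K latent components t_k as linear combinations of the columns of X
    (minimal NIPALS-style sketch for the univariate-response PLS1 setting)."""
    X = X - X.mean(axis=0)          # center the predictors
    y = y - y.mean()                # center the response
    Xk = X.copy()
    T, W = [], []
    for _ in range(K):
        w = Xk.T @ y                # weights: covariance of each predictor with y
        w /= np.linalg.norm(w)      # normalize the weight vector
        t = Xk @ w                  # component t_k: linear combination of predictors
        p = Xk.T @ t / (t @ t)      # loading vector for deflation
        Xk = Xk - np.outer(t, p)    # deflate X before extracting the next component
        T.append(t)
        W.append(w)
    return np.column_stack(T), np.column_stack(W)

# Hypothetical toy data: 50 observations, 10 covariates, response driven by X[:, 0]
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X[:, 0] + 0.1 * rng.normal(size=50)
T, W = pls1_components(X, y, K=2)
print(T.shape)  # (50, 2): one column per component
```

Because X is deflated at each step, the extracted components are mutually orthogonal, which is what makes the subsequent regression of y on T well conditioned even when p exceeds n.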