Abstract

Sufficient dimension reduction (SDR) is effective in high‐dimensional data analysis as it mitigates the curse of dimensionality while retaining full regression information. Missing predictors are common in high‐dimensional data, yet they are only occasionally discussed in the SDR context. In this paper, an inverse probability weighted sliced inverse regression (SIR) is studied with predictors missing at random. We cast SIR into the estimating equation framework to avoid inverting a large-scale covariance matrix. This strategy handles large dimensionality and strong collinearity among the predictors more efficiently than the spectral decomposition of classical SIR. Numerical studies confirm the superiority of the proposed procedure over existing methods. © 2011 Wiley Periodicals, Inc. Statistical Analysis and Data Mining, 2011
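For concreteness, the sketch below illustrates one way inverse-probability-weighted SIR can be set up on complete cases in Python. It uses the classical spectral-decomposition form of SIR rather than the estimating-equation formulation the paper advocates, and the missingness model, toy data, and function name `ipw_sir` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ipw_sir(X, y, weights, n_slices=5, n_dirs=1):
    """Sliced inverse regression with inverse-probability weights.

    A minimal sketch: complete cases only, each weighted by 1 / P(observed).
    This uses the classical spectral decomposition of SIR, not the
    estimating-equation formulation described in the paper.
    """
    w = weights / weights.sum()
    # Weighted centering and whitening of the predictors.
    mu = w @ X
    Xc = X - mu
    Sigma = Xc.T @ (Xc * w[:, None])
    vals, vecs = np.linalg.eigh(Sigma)
    Sigma_inv_half = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = Xc @ Sigma_inv_half
    # Slice the response and form the weighted between-slice covariance.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((X.shape[1], X.shape[1]))
    for idx in slices:
        p_h = w[idx].sum()
        m_h = (w[idx] @ Z[idx]) / p_h   # weighted slice mean of Z
        M += p_h * np.outer(m_h, m_h)
    # Leading eigenvectors of M estimate the central-subspace directions;
    # back-transform to the original predictor scale.
    _, evecs = np.linalg.eigh(M)
    directions = Sigma_inv_half @ evecs[:, -n_dirs:]
    return directions / np.linalg.norm(directions, axis=0)


# Toy single-index model with predictors missing at random (hypothetical setup).
rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[0] = 1.0
y = (X @ beta) ** 3 + 0.5 * rng.standard_normal(n)

# Hypothetical missingness model: the probability of observing the predictors
# depends only on the fully observed response, so the data are MAR and
# complete cases are weighted by the inverse of that probability.
pi = 1.0 / (1.0 + np.exp(-(1.0 + 0.5 * y)))
observed = rng.uniform(size=n) < pi
B = ipw_sir(X[observed], y[observed], 1.0 / pi[observed])
print("estimated direction:", np.round(B[:, 0], 2))
```

In this sketch the estimated direction should align closely with the true index `beta`; the paper's estimating-equation approach targets the same subspace while avoiding the explicit covariance inversion performed here.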
