Abstract

Dimension reduction in a regression analysis of a response y given a p-dimensional vector of predictors x replaces x with a lower-dimensional set of linear combinations β′x of the predictors, without specifying a parametric model and without loss of information about the conditional distribution of y given x. We unify three existing methods, sliced inverse regression (SIR), sliced average variance estimate (SAVE), and principal Hessian directions (pHd), into a larger class of methods. Each method estimates a particular candidate matrix, essentially a matrix of parameters. We introduce broad classes of dimension reduction candidate matrices, distinguishing estimators of the matrices from the matrices themselves. Given these classes of methods and several ways to estimate any matrix, the problem becomes selecting a particular matrix and estimation method. We propose bootstrap methodology to select among candidate matrices, estimators, and dimension, and in particular we investigate linear combinations of different methods.
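To make the candidate-matrix idea concrete, the following is a minimal sketch of sliced inverse regression (SIR), one of the three methods the abstract unifies. The function name, slicing scheme, and synthetic example are illustrative assumptions, not the authors' implementation; the candidate matrix here is the weighted covariance of within-slice means of the standardized predictors, and its leading eigenvectors (back-transformed to the original scale) estimate β.

```python
import numpy as np

def sir_directions(x, y, n_slices=10, d=1):
    """Estimate d dimension-reduction directions via SIR (illustrative sketch).

    Builds the SIR candidate matrix Cov(E[z | y-slice]) on standardized
    predictors z, then returns its top-d eigenvectors mapped back to the
    original predictor scale.
    """
    n, p = x.shape
    # Standardize predictors: z = Sigma^{-1/2} (x - mean)
    mean = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    z = (x - mean) @ inv_sqrt
    # Slice the data on the ordered response and average z within each slice
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    m = np.zeros((p, p))
    for idx in slices:
        zbar = z[idx].mean(axis=0)
        m += (len(idx) / n) * np.outer(zbar, zbar)  # candidate matrix
    # Leading eigenvectors of the candidate matrix, back-transformed
    _, v = np.linalg.eigh(m)
    beta = inv_sqrt @ v[:, ::-1][:, :d]
    return beta / np.linalg.norm(beta, axis=0)

# Toy check: y depends on x only through one linear combination beta_true.
rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 5))
beta_true = np.array([1.0, 1.0, 0.0, 0.0, 0.0]) / np.sqrt(2.0)
y = x @ beta_true + 0.5 * rng.normal(size=2000)
beta_hat = sir_directions(x, y, n_slices=10, d=1)
```

Replacing the candidate matrix inside the loop with the slice-wise quantity used by SAVE or pHd changes the method while leaving the eigen-decomposition and back-transformation steps intact, which is the sense in which these methods form one class.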
