Abstract

Dimension reduction for regression analysis has been one of the most popular topics of the past two decades. It has seen much progress since the introduction of inverse regression, centred around two key methods: sliced inverse regression (SIR) and sliced average variance estimation (SAVE). It is well known that SIR works poorly when the inverse conditional expectation is close to being nonrandom. SAVE and its many generalizations, which do not suffer from this drawback, lag behind SIR in many other circumstances. Usually a weighted hybrid of SIR and SAVE is needed to improve overall performance; however, it is difficult to find the optimal mixture weights in a hybrid, and most such hybrid methods, like SAVE itself, require the restrictive constant (conditional) variance condition. We propose a much weaker condition and a new accompanying algorithm. This enables us to construct several new central matrices that compare very favourably with existing central-matrix-based methods without resorting to hybrids. The Canadian Journal of Statistics 41: 421–438; 2013 © 2013 Statistical Society of Canada
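For readers unfamiliar with the methods the abstract contrasts, the following is a minimal sketch of classical SIR (Li, 1991), not the authors' proposed method: standardize the predictors, slice the response, and take the leading eigenvectors of the covariance of the slice means of the standardized predictors (SIR's central matrix). The slice count and function name are illustrative choices, not from the paper.

```python
import numpy as np

def sir(X, y, n_slices=10, n_components=1):
    """Sketch of sliced inverse regression: estimate effective dimension
    reduction directions via the central matrix Cov(E[Z | y])."""
    n, p = X.shape
    # Standardize predictors: Z = (X - mean) @ Sigma^{-1/2}
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ inv_sqrt
    # Slice the data on y and average Z within each slice
    order = np.argsort(y)
    M = np.zeros((p, p))  # SIR's central matrix estimate
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original X scale
    w, V = np.linalg.eigh(M)
    directions = inv_sqrt @ V[:, ::-1][:, :n_components]
    return directions / np.linalg.norm(directions, axis=0)
```

With a monotone link (e.g. y = (βᵀx)³ + noise) the slice means vary with y and SIR recovers β; with a symmetric link such as y = (βᵀx)² + noise, E[Z | y] is nearly nonrandom and SIR fails, which is exactly the drawback SAVE-type methods address.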


