Abstract

The goal of this article is to introduce an iterative application of dimension reduction methods. It is known that in some situations, methods such as Sliced Inverse Regression (SIR), Ordinary Least Squares (OLS) and Cumulative Mean Estimation (CUME) can find only a partial basis for the dimension reduction subspace. For many models, however, these methods are very good estimators of this partial basis. In this paper we propose a simple iterative procedure that differs from existing combined approaches in that the partial basis is estimated first, and the second dimension reduction method then seeks only the remainder of the dimension reduction subspace. Our approach is compared against existing combined dimension reduction approaches on simulated data as well as two example data sets.
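The two-stage idea described above can be sketched in code: a first method estimates a partial basis, and a second method is then restricted to the orthogonal complement of that estimate. The sketch below pairs CUME with SAVE as the second-stage method for illustration (the paper itself focuses on CUME followed by CUVE); the function names, slice count and dimensions `k1`, `k2` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def standardize(X):
    """Center X and whiten it with the inverse square root of its covariance."""
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(S)
    S_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    return (X - mu) @ S_inv_sqrt, S_inv_sqrt

def cume_matrix(Z, y):
    """CUME candidate matrix M = E[m(Y) m(Y)^T], m(t) = E[Z 1(y <= t)]."""
    n = len(y)
    ind = (y[:, None] <= y[None, :]).astype(float)  # ind[i, j] = 1(y_i <= y_j)
    m = Z.T @ ind / n                               # column j is m_hat(y_j)
    return m @ m.T / n

def save_matrix(Z, y, H=10):
    """SAVE candidate matrix M = sum_h p_h (I - Var(Z | slice h))^2."""
    n, p = Z.shape
    order = np.argsort(y)                           # H contiguous slices of y
    M = np.zeros((p, p))
    for chunk in np.array_split(order, H):
        D = np.eye(p) - np.cov(Z[chunk], rowvar=False)
        M += (len(chunk) / n) * D @ D
    return M

def iterative_edr(X, y, k1=1, k2=1):
    """Stage 1: partial basis from CUME. Stage 2: SAVE on its complement."""
    Z, S_inv_sqrt = standardize(X)
    w, v = np.linalg.eigh(cume_matrix(Z, y))
    B1 = v[:, np.argsort(w)[::-1][:k1]]             # top k1 CUME directions
    P = np.eye(X.shape[1]) - B1 @ B1.T              # project out stage-1 basis
    w2, v2 = np.linalg.eigh(P @ save_matrix(Z, y) @ P)
    B2 = v2[:, np.argsort(w2)[::-1][:k2]]           # remainder of the basis
    B = S_inv_sqrt @ np.hstack([B1, B2])            # back to the x-scale
    return B / np.linalg.norm(B, axis=0)

# Example: y = x1 + x2^2 + noise. CUME should pick up the monotone
# direction; the symmetric x2^2 term is left for the second stage.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = X[:, 0] + X[:, 1] ** 2 + 0.2 * rng.normal(size=2000)
B = iterative_edr(X, y)
```

Restricting the second eigen-decomposition to the complement of `B1` is what distinguishes this from simply pooling the two candidate matrices, as existing combined approaches do.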

Highlights

  • In regression analysis, one of the challenges faced with multi-dimensional data sets is graphical visualization to allow the relationship(s) between response and predictor variables to be detected

  • Since Cumulative Mean Estimation (CUME) requires fewer conditions on x than Cumulative Variance Estimation (CUVE), but can be limited by symmetric dependence between the response and predictors, we focus on applying CUME to obtain a partial basis, followed by CUVE to find the remaining elements of a basis for S

  • We introduce a new way of combining existing dimension reduction methods

Introduction

One of the challenges faced with multi-dimensional data sets is graphical visualization to allow the relationship(s) between response and predictor variables to be detected. Within the past 20 years, in order to combat this challenge, several dimension reduction techniques that are very simple to implement have been introduced, including Sliced Inverse Regression (SIR). Whilst classical slicing estimation methods such as SIR and SAVE require dividing the response into a discrete number of slices, more recent methods have been proposed which negate the need for slicing, such as Discretization-Expectation Estimation [27], Cumulative Mean Estimation (CUME), Cumulative Variance Estimation (CUVE) and Cumulative Directional Regression. These methods seek a p × K matrix B_K such that the response depends on the predictor x only through B_K^⊤ x; using B_K^⊤ x in place of x reduces the dimension of the model, since the former has dimension K < p. The column space of such a B_K is a dimension reduction subspace. For identifiability purposes we will assume throughout that S is the central dimension reduction (CDR) subspace, which Cook [6] defined to be the intersection of all dimension reduction subspaces
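The underlying model can be written in the standard notation of the dimension reduction literature (the symbols f, ε and B_K below follow the usual generic formulation, and are not reproduced verbatim from this paper):

```latex
% Sufficient dimension reduction model: y depends on x only
% through the K < p linear combinations B_K^\top x.
y = f(B_K^\top x, \varepsilon), \qquad
B_K \in \mathbb{R}^{p \times K}, \quad \varepsilon \perp x.

% S = span(B_K) is a dimension reduction subspace; the central
% dimension reduction (CDR) subspace is the intersection of all
% such subspaces (Cook [6]):
S_{y \mid x} = \bigcap \left\{ \operatorname{span}(B) :
  y \perp x \mid B^\top x \right\}.
```

When this intersection is itself a dimension reduction subspace, it is the unique smallest one, which is what makes S a well-defined estimation target.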
