Abstract

In this paper, we review three families of optimization-based methods for linear sufficient dimension reduction. By minimizing general loss functions, we cast classical methods, such as ordinary least squares and sliced inverse regression, and modern methods, such as principal support vector machines and principal quantile regression, into a unified framework. We then review sufficient dimension reduction methods based on maximizing dependence measures, including the distance covariance, the Hilbert–Schmidt independence criterion, the martingale difference divergence, and the expected conditional difference. Finally, we provide an information-theoretic perspective on the third family of sufficient dimension reduction methods.
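The abstract only names the methods it surveys; as a concrete illustration of one of the classical methods cited, the following is a minimal NumPy sketch of sliced inverse regression (SIR). The function name `sir_directions`, the slicing scheme, and the toy data are illustrative assumptions, not code from the paper.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Minimal sliced inverse regression: estimate a basis of the
    central subspace from the top eigenvectors of the slice-mean
    candidate matrix."""
    n, p = X.shape
    # Standardize predictors: Z = Sigma^{-1/2} (X - mean)
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt
    # Slice the response into roughly equal-count bins
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # Candidate matrix M = sum_h p_h * m_h m_h^T, where m_h is the
    # within-slice mean of the standardized predictors
    M = np.zeros((p, p))
    for idx in slices:
        m_h = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m_h, m_h)
    # Leading eigenvectors of M span the standardized central
    # subspace; map back to the original predictor scale
    _, vecs = np.linalg.eigh(M)
    B = inv_sqrt @ vecs[:, -n_dirs:][:, ::-1]
    return B / np.linalg.norm(B, axis=0)

# Toy usage: y depends on X only through one linear combination,
# so SIR should recover a vector proportional to (1, -1, 0, 0, 0).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
b = np.array([1.0, -1.0, 0.0, 0.0, 0.0])
y = (X @ b) ** 3 + 0.1 * rng.normal(size=500)
print(sir_directions(X, y, n_dirs=1))
```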
