Abstract

Correspondence analysis (CA) is a multivariate statistical tool used to visualize and interpret data dependencies by finding maximally correlated embeddings of pairs of random variables. CA has found applications in fields ranging from epidemiology to the social sciences. However, current methods for CA do not scale to large, high-dimensional datasets. In this paper, we provide a novel interpretation of CA in terms of information-theoretic quantities called the principal inertia components. We show that estimating the principal inertia components, which amounts to solving a functional optimization problem over the space of finite-variance functions of two random variables, is equivalent to performing CA. We then leverage this insight to design algorithms that perform CA at scale. Specifically, we demonstrate how the principal inertia components can be reliably approximated from data using deep neural networks. Finally, we show how the maximally correlated embeddings of pairs of random variables in CA play a central role in several learning problems, including multi-view and multi-modal learning and the visualization of classification boundaries.
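For concreteness, below is a minimal sketch of classical CA computed via an SVD of the standardized residuals of a contingency table; the squared singular values are the principal inertia components referred to above. This is the standard textbook construction, not the neural estimator developed in the paper, and the function name and example table are illustrative.

```python
# Minimal sketch: classical correspondence analysis via SVD.
# Illustrative only; not the paper's neural-network estimator.
import numpy as np

def correspondence_analysis(N, k=2):
    """Return k-dim row/column embeddings of contingency table N."""
    P = N / N.sum()                      # joint probability matrix
    r = P.sum(axis=1)                    # row marginals
    c = P.sum(axis=0)                    # column marginals
    # Standardized residuals: D_r^{-1/2} (P - r c^T) D_c^{-1/2}
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
    # sigma**2 are the principal inertia components of the table;
    # principal coordinates rescale the singular vectors by the marginals.
    row_coords = (U[:, :k] * sigma[:k]) / np.sqrt(r)[:, None]
    col_coords = (Vt.T[:, :k] * sigma[:k]) / np.sqrt(c)[:, None]
    return row_coords, col_coords, sigma[:k] ** 2

# Usage on a small synthetic contingency table.
N = np.array([[20, 5, 3], [4, 18, 6], [2, 7, 25]], dtype=float)
rows, cols, inertias = correspondence_analysis(N)
print(inertias)  # leading principal inertia components
```

The SVD step is the scaling bottleneck motivating the paper: it requires materializing the full contingency table, which is infeasible for large alphabets or continuous data.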
