Abstract

Dimensionality reduction is an unsupervised learning task aimed at creating a low-dimensional summary and/or extracting the most salient features of a dataset. Principal component analysis is a linear dimensionality reduction method in the sense that each principal component is a linear combination of the input variables. To allow features that are nonlinear functions of the input variables, many nonlinear dimensionality reduction (NLDR) methods have been proposed. In this article, we propose novel NLDR methods based on bottleneck deep autoencoders. Our contributions are twofold: (1) we introduce a monotonicity constraint into bottleneck deep autoencoders for estimating a single nonlinear component and propose two methods for fitting the model; (2) we propose a new, forward stepwise deep learning architecture for estimating multiple nonlinear components. The former helps extract interpretable, monotone components when the assumption of monotonicity holds, and the latter helps evaluate reconstruction errors in the original data space across a range of components. We conduct numerical studies to compare different model-fitting methods and use two real data examples from studies of human immune responses to HIV to illustrate the proposed methods. Supplementary materials for this article are available online.
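To make the architecture described above concrete, the following is a minimal sketch of a one-component bottleneck autoencoder with a monotone decoder. All names, dimensions, and the specific constraint device are illustrative assumptions, not the paper's actual model or fitting methods: here monotonicity is imposed by reparameterizing the decoder weights through a softplus so they are non-negative, which makes every reconstructed variable a monotone nondecreasing function of the single nonlinear component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 5 input variables, 8 hidden units, 1-D bottleneck.
d, h = 5, 8

# Encoder weights are unconstrained.
W_enc = rng.normal(size=(h, d))
b_enc = rng.normal(size=h)
w_bot = rng.normal(size=h)

# Decoder weights pass through softplus(.) >= 0; a non-negative
# combination of monotone activations is monotone in the component z.
V_raw = rng.normal(size=(h, 1))
U_raw = rng.normal(size=(d, h))
softplus = lambda a: np.log1p(np.exp(a))

def encode(x):
    # x: (d,) -> scalar nonlinear component z
    return w_bot @ np.tanh(W_enc @ x + b_enc)

def decode(z):
    # z: scalar or (n,) -> reconstructions of shape (d, n)
    hdn = np.tanh(softplus(V_raw) @ np.atleast_2d(z))  # monotone in z
    return softplus(U_raw) @ hdn                       # still monotone

# Monotonicity check: decoding a larger component value never
# decreases any reconstructed coordinate.
zs = np.linspace(-2.0, 2.0, 50)
X_hat = decode(zs)                       # shape (d, 50)
assert np.all(np.diff(X_hat, axis=1) >= 0)
```

In an actual fit, the raw weights would be trained to minimize reconstruction error (the paper compares two fitting methods for this constrained model); the sketch only illustrates how a weight-sign constraint yields monotone components.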
