Abstract

Kernel mixture models are routinely used for density estimation. In multivariate settings, however, it is difficult to efficiently approximate lower-dimensional structure in the data. For example, it is common to suppose that the density is concentrated near a lower-dimensional nonlinear subspace or manifold. Typical kernels used to locally approximate such subspaces are inflexible, so a large number of components is often needed. We propose a novel class of LOcally COnvex (LOCO) kernels that flexibly adapt to nonlinear local structure. LOCO kernels are induced by introducing random knots within local neighborhoods and generating data as random convex combinations of these knots, with adaptive weights and additive noise. For identifiability, we constrain all observations from a particular component to have the same mean. For Bayesian inference subject to this constraint, we develop a hybrid Gibbs sampler and optimization algorithm that incorporates a Lagrange multiplier within a splitting method. The resulting LOCO algorithm dramatically outperforms typical Gaussian mixture models in challenging examples.
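
A minimal sketch of the generative mechanism described above, assuming Dirichlet-distributed convex weights and isotropic Gaussian noise; the paper's adaptive weight and noise specifications may differ, and the function name and parameters here are hypothetical:

```python
import numpy as np

def sample_loco_component(n, knots, alpha=1.0, noise_sd=0.1, rng=None):
    """Draw n points from one hypothetical LOCO component.

    Each observation is a random convex combination of the component's
    knots plus additive Gaussian noise. Dirichlet weights stand in for
    the paper's adaptive weights; both choices are illustrative assumptions.
    """
    rng = np.random.default_rng(rng)
    k, d = knots.shape                                 # k knots in d dimensions
    w = rng.dirichlet(alpha * np.ones(k), size=n)      # convex weights; each row sums to 1
    x = w @ knots                                      # convex combinations of the knots
    return x + rng.normal(scale=noise_sd, size=(n, d))  # additive noise

# Example: knots placed along an arc yield a locally curved component,
# illustrating how LOCO kernels can track nonlinear local structure.
t = np.linspace(0, np.pi, 5)
knots = np.column_stack([np.cos(t), np.sin(t)])        # 5 knots on a unit arc
samples = sample_loco_component(500, knots, alpha=0.5, noise_sd=0.05, rng=0)
```

Note that with a symmetric Dirichlet the expected weight vector is uniform, so every observation in the sketch shares the same mean (the knot centroid); the paper's identifiability constraint enforces such a common component mean, which its hybrid Gibbs/optimization algorithm handles via a Lagrange multiplier within a splitting method.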
