Abstract

Deep Gaussian processes (DGPs) are the natural extension of Gaussian processes (GPs) to a multi-layer architecture. DGPs are powerful probabilistic models that have shown better generalization performance and prediction uncertainty estimation than standard GPs. Nevertheless, exact inference in DGPs is intractable, making these models hard to train. Current approaches in the literature address this with approximate inference techniques such as variational inference or approximate expectation propagation. In this work, we present a new inference method for DGPs based on Monte Carlo methods and the expectation propagation algorithm. Our experiments show that the method scales well to large datasets and that its performance is comparable to or better than that of other state-of-the-art methods. Furthermore, our training method leads to interesting properties in the predictive distribution of the DGP: it captures input-dependent output noise and it can generate multimodal predictive distributions. These two properties, which are not shared by other state-of-the-art approximate inference methods for DGPs, are analyzed in detail in our experiments.
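The abstract's key claim is that propagating Monte Carlo samples through the layers of a DGP yields predictive distributions that can be heteroscedastic and multimodal, unlike the Gaussian predictions of a single GP. Below is a minimal NumPy sketch of that mechanism for a two-layer DGP prior; the RBF kernel, lengthscales, jitter, and sample counts are illustrative assumptions, not the paper's actual model or its EP-based training procedure.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between two 1-D arrays of inputs.
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 50)                    # test inputs
K = rbf_kernel(x, x) + 1e-6 * np.eye(len(x))      # layer-1 prior covariance (jittered)
L = np.linalg.cholesky(K)

n_samples = 200
outputs = []
for _ in range(n_samples):
    # Layer 1: sample a hidden function h ~ GP(0, K) evaluated at x.
    h = L @ rng.standard_normal(len(x))
    # Layer 2: sample the output function, using the hidden values h as inputs.
    Kh = rbf_kernel(h, h, lengthscale=0.5) + 1e-6 * np.eye(len(x))
    f = np.linalg.cholesky(Kh) @ rng.standard_normal(len(x))
    outputs.append(f)

# (n_samples, len(x)) array: Monte Carlo samples of the predictive distribution.
outputs = np.stack(outputs)
```

At each test input, the predictive distribution is the empirical mixture over the `outputs` samples; because the second layer warps the hidden function nonlinearly, this mixture can concentrate around several distinct values and its spread can vary with the input, which is the multimodality and input-dependent noise the abstract describes.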
