Abstract

We propose a variational inference-based framework for training a Gaussian process regression model on censored observational data. Data censoring is a common problem in data gathering and requires specialized inference techniques, since the resulting probabilistic models are typically analytically intractable. In this article we exploit the variational sparse Gaussian process inducing-variable framework and local variational methods to compute an analytically tractable lower bound on the true log marginal likelihood of the probabilistic model, which can be used to perform Bayesian model training and inference. We demonstrate the proposed framework on synthetically produced, noise-corrupted observational data, as well as on a real-world data set subject to artificial censoring. The resulting predictions are comparable to those of existing methods for handling data censoring, while offering a significant reduction in computational cost.
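As a minimal sketch of the censored observation model that makes the marginal likelihood intractable, the snippet below evaluates a Tobit-type Gaussian likelihood: uncensored points contribute a Gaussian density, while points clipped at a lower or upper threshold contribute a Gaussian tail probability. The function name and thresholds are illustrative assumptions, not part of the paper; the paper's contribution is the variational bound that keeps the resulting Gaussian process model tractable.

```python
import numpy as np
from scipy.stats import norm

def tobit_log_likelihood(y, f, sigma, lo=-np.inf, hi=np.inf):
    """Per-point log-likelihood of observations y given latent values f
    under a Gaussian noise model with censoring thresholds [lo, hi].

    Uncensored points use the Gaussian log-density; censored points use
    the log of the corresponding Gaussian tail probability. Illustrative
    sketch only -- the paper's variational framework bounds the
    intractable expectations of these terms under the GP posterior.
    """
    y = np.asarray(y, dtype=float)
    f = np.asarray(f, dtype=float)
    ll = np.empty_like(y)
    left = y <= lo            # recorded at the lower threshold
    right = y >= hi           # recorded at the upper threshold
    mid = ~(left | right)     # fully observed
    ll[mid] = norm.logpdf(y[mid], loc=f[mid], scale=sigma)
    # P(y <= lo | f): mass of the Gaussian below the lower threshold
    ll[left] = norm.logcdf((lo - f[left]) / sigma)
    # P(y >= hi | f): mass of the Gaussian above the upper threshold
    ll[right] = norm.logsf((hi - f[right]) / sigma)
    return ll
```

The tail-probability terms (`logcdf`, `logsf`) are what break conjugacy: their expectations under a Gaussian process posterior have no closed form, which is why the paper resorts to local variational bounds.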
