Abstract

We propose the Laplace method to derive approximate inference for Gaussian process (GP) regression in the location and scale parameters of the student-t probabilistic model. This allows both the mean and the variance of the data to vary as functions of the covariates, with the attractive feature that the student-t model has been widely used as a tool for robustifying data analysis. The challenge in approximate inference for this model lies in the analytical intractability of the posterior distribution and the lack of concavity of the log-likelihood function. We present a natural gradient adaptation for the estimation process, which relies primarily on the property that the student-t model naturally has an orthogonal parametrization. Due to this property of the model, the Laplace approximation becomes significantly more robust than the traditional approach using Newton's method. We also introduce an alternative Laplace approximation that uses the model's Fisher information matrix. According to our experiments, this alternative provides posterior approximations and predictive performance very similar to those of the traditional Laplace approximation with the model's Hessian matrix, but the proposed Laplace–Fisher approximation is faster and more stable to compute. We also compare both of these Laplace approximations with the Markov chain Monte Carlo (MCMC) method, and we discuss how our approach can, in general, improve the inference algorithm in cases where the probabilistic model assumed for the data is not log-concave.
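To make the two approximations concrete, the minimal sketch below (our illustration, with hypothetical function and variable names, not the authors' code) computes the curvature term of the Laplace–Fisher variant for a student-t observation model whose latent GPs are the location and the log-scale. Unlike the Hessian used by the traditional Laplace approximation, the expected (Fisher) information does not depend on the observed responses, is strictly positive, and has a zero cross term between location and scale, reflecting the orthogonal parametrization the paper exploits.

```python
import numpy as np

def fisher_blocks_student_t(f_logscale, nu):
    """Per-observation Fisher information of the student-t model in the
    (location, log-scale) parametrization.

    Classical results for the t distribution (assumed here; see e.g.
    Lange, Little and Taylor 1989) give
        I_loc      = (nu + 1) / ((nu + 3) * sigma^2),
        I_logscale = 2 * nu / (nu + 3),
    with a zero cross term: location and scale are orthogonal. These
    blocks do not depend on the observations and are strictly positive,
    which is what makes the Laplace-Fisher variant stable to compute.
    """
    sigma2 = np.exp(2.0 * f_logscale)                   # scale^2 per data point
    w_loc = (nu + 1.0) / ((nu + 3.0) * sigma2)          # location block (diagonal)
    w_logscale = np.full_like(sigma2, 2.0 * nu / (nu + 3.0))  # log-scale block
    return w_loc, w_logscale                            # cross block is zero
```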

Highlights

  • Numerous applications in the statistics and machine learning communities are fraught with datasets in which some data points appear to deviate strongly from the bulk of the remaining data.

  • The inference algorithm for estimating the parameters of the Laplace approximation presented here is general. It closely follows the stable implementation of the Laplace approximation for log-concave likelihoods presented by Rasmussen and Williams (2006) with only minor modifications, and it generalizes this stable algorithm to likelihoods that are not log-concave as well as to multivariate Gaussian process models (a sketch of the resulting update step appears after this list). These general properties are attractive for other types of models, and we present an example of orthogonal reparametrization for the Weibull probabilistic model and discuss its possible benefits before introducing GP regression with the heteroscedastic student-t model.

  • Many approximate methods have been proposed for the posterior distribution of the Gaussian process model with a homoscedastic student-t probabilistic model for the data.
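As a rough, single-output illustration of the stable algorithm mentioned above, the sketch below follows the formulation of Rasmussen and Williams (2006, Algorithm 3.1); the function and variable names are ours, and the multivariate extension used in the paper is omitted. Passing the negative Hessian of the log-likelihood as W gives the classical Newton step, while passing the Fisher information, which is positive definite by construction, gives a natural-gradient step that stays well defined for non-log-concave likelihoods.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def stable_laplace_step(K, f, grad_loglik, W):
    """One stable update toward the Laplace mode, in the spirit of
    Rasmussen and Williams (2006, Algorithm 3.1).

    K           n x n GP prior covariance of the latent values f
    grad_loglik gradient of log p(y | f) at the current f
    W           positive diagonal curvature of -log p(y | f): the
                negative Hessian (Newton step) or the Fisher
                information (natural-gradient step), which is
                guaranteed positive.
    """
    sW = np.sqrt(W)                                     # W^(1/2), valid since W > 0
    B = np.eye(len(f)) + sW[:, None] * K * sW[None, :]  # I + W^(1/2) K W^(1/2)
    L = cholesky(B, lower=True)                         # well conditioned: eigvals >= 1
    b = W * f + grad_loglik
    v = solve_triangular(L, sW * (K @ b), lower=True)
    a = b - sW * solve_triangular(L.T, v, lower=False)
    return K @ a                                        # updated mode estimate
```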

Summary

Introduction

Numerous applications in the statistics and machine learning communities are fraught with datasets in which some data points appear to deviate strongly from the bulk of the remaining data. The difficulty in estimating the parameters of the Laplace approximation, discussed by Vanhatalo et al. (2009) and Jylänki et al. (2011), is circumvented by first noting that the location and scale parameters of the student-t model are orthogonal (Cox and Reid 1987; Huzurbazar 1956; Achcar 1994). This particular property of the student-t model readily allows us to propose an efficient inference algorithm for the Laplace approximation based on the natural gradient of Amari (1998), known in statistics as the Fisher scoring algorithm.
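For concreteness, a single Fisher-scoring step has the generic form sketched below (an illustrative snippet with hypothetical names, not the paper's implementation): it is Newton's method with the observed Hessian replaced by the expected (Fisher) information, so the step is always an ascent direction for the log-likelihood.

```python
import numpy as np

def fisher_scoring_step(theta, score, fisher_info, step_size=1.0):
    """One natural-gradient / Fisher-scoring update:

        theta <- theta + step_size * F(theta)^{-1} * score(theta)

    score is the gradient of the log-likelihood at theta, and
    fisher_info is the expected information matrix F(theta). Since
    F(theta) is positive definite, the direction is an ascent direction
    even where the log-likelihood is not concave and a Newton step with
    an indefinite Hessian could move downhill.
    """
    return theta + step_size * np.linalg.solve(fisher_info, score)
```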

Aspects of orthogonal parametrization for statistical models
Gaussian process regression with the heteroscedastic student-t model
Student-t model and basic properties
Gaussian process regression in the location and scale parameters
Approximate inference with the Laplace method
Laplace approximation
Laplace–Fisher approximation
Approximate posterior contraction and outliers
Prediction of future outcomes with the Laplace approximation
On the computational implementation
Natural gradient for finding the mode
Approximate marginal likelihood and parameter adaptation
Experiments
Priors for the GP hyperparameters and degrees-of-freedom parameter
Simulated data with simple regressions
Predictive performance on real datasets
Computational performance in simulated and real data
Concluding remarks and discussion