Abstract

Gaussian process regression models are an appealing machine learning method, as they learn expressive nonlinear models from exemplar data with minimal parameter tuning and estimate both the mean and covariance of unseen points. However, their cubic computational complexity growth with the number of samples has been a long-standing challenge. Training requires the inversion of an $N \times N$ kernel matrix at every iteration, whereas regression requires computing an $m \times N$ kernel matrix, where $N$ and $m$ are the number of training and test points, respectively. This work demonstrates how approximating the covariance kernel with its eigenvalues and eigenfunctions leads to an approximate Gaussian process with a significant reduction in training and regression complexity. Training now requires computing only an $N \times n$ eigenfunction matrix and an $n \times n$ inverse, where $n$ is the selected number of eigenvalues. Furthermore, regression now requires only an $m \times n$ matrix. Finally, in a special case, the hyperparameter optimization is completely independent of the number of training samples. The proposed method can regress over multiple outputs, learn the correlations between them, and estimate their derivatives to any order. The computational complexity reduction, regression capabilities, multioutput correlation learning, and comparison to the state of the art are demonstrated in simulation examples. Finally, we show how the proposed approach can be utilized to model real human data.
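
As a rough illustration of the complexity reduction described above (not the authors' implementation), the sketch below builds a reduced-rank Gaussian process from a kernel eigen-expansion: training touches only an $N \times n$ eigenfunction matrix and an $n \times n$ linear system, and prediction only an $m \times n$ matrix. The Laplacian eigenfunctions on an interval $[-L, L]$, the squared-exponential spectral density, and all function names and parameters here are illustrative assumptions.

```python
# Minimal sketch of eigenfunction-based reduced-rank GP regression (assumed
# setup: Laplacian eigenfunctions on [-L, L] and a squared-exponential kernel).
import numpy as np

def eigenfunctions(x, n, L):
    """Return the N x n matrix of sinusoidal basis functions on [-L, L]."""
    j = np.arange(1, n + 1)
    return np.sin(np.pi * j * (x[:, None] + L) / (2 * L)) / np.sqrt(L)

def se_spectral_density(w, variance, lengthscale):
    """Spectral density of the squared-exponential kernel (assumed kernel)."""
    return variance * np.sqrt(2 * np.pi) * lengthscale * np.exp(-0.5 * (lengthscale * w) ** 2)

def fit_predict(x_train, y_train, x_test, n=32, L=5.0,
                variance=1.0, lengthscale=0.5, noise=0.1):
    j = np.arange(1, n + 1)
    sqrt_eigvals = np.pi * j / (2 * L)                 # square roots of Laplacian eigenvalues
    s = se_spectral_density(sqrt_eigvals, variance, lengthscale)

    Phi = eigenfunctions(x_train, n, L)                # N x n eigenfunction matrix
    A = Phi.T @ Phi + noise ** 2 * np.diag(1.0 / s)    # n x n system: the only "inverse" needed
    w_mean = np.linalg.solve(A, Phi.T @ y_train)

    Phi_star = eigenfunctions(x_test, n, L)            # m x n matrix for prediction
    mean = Phi_star @ w_mean
    cov = noise ** 2 * Phi_star @ np.linalg.solve(A, Phi_star.T)
    return mean, np.sqrt(np.diag(cov))

# Usage on synthetic data
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)
xs = np.linspace(-3, 3, 50)
mu, std = fit_predict(x, y, xs)
```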
