Abstract

In traditional Gaussian process regression (GPR), the covariance matrix is modeled by a kernel function governed by a set of hyper-parameters. However, estimating these hyper-parameters is generally a highly nonconvex optimization problem, which imposes computational difficulties and undermines practical performance. To improve prediction accuracy, we propose in this paper a novel GPR algorithm that additionally estimates the precision matrix of the target values. The covariance and precision matrices are coupled through a regularized approximation error term. In practice, the precision matrix and the hyper-parameters are trained by alternating optimization. Experimental results demonstrate that the joint-learning formulation outperforms traditional GPR.
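
The abstract only outlines the approach. Below is a minimal, hypothetical NumPy sketch of such an alternating scheme, assuming an RBF kernel with hyper-parameters (length scale, signal variance, noise variance), a coupling term of the form ||K(theta) P - I||_F^2, and a Frobenius-norm regularizer on the precision matrix P. The function names (rbf_kernel, alternating_fit), the regularization weight lam, and the closed-form P-update are illustrative assumptions, not details taken from the paper.

```python
# A minimal, hypothetical sketch of the alternating scheme outlined above.
# The specific choices here -- an RBF kernel, the coupling term
# ||K(theta) P - I||_F^2, and a Frobenius-norm regularizer on P -- are
# illustrative assumptions, not the paper's exact formulation.
import numpy as np
from scipy.optimize import minimize


def rbf_kernel(X, length_scale, signal_var, noise_var):
    """Squared-exponential kernel plus noise and a small jitter on the diagonal."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = signal_var * np.exp(-0.5 * d2 / length_scale ** 2)
    return K + (noise_var + 1e-8) * np.eye(len(X))


def theta_objective(theta, P, X, y):
    """Negative log marginal likelihood plus the coupling term, with P held fixed."""
    K = rbf_kernel(X, *np.exp(theta))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    nll = 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))
    coupling = np.linalg.norm(K @ P - np.eye(len(X)), "fro") ** 2
    return nll + coupling


def alternating_fit(X, y, lam=1e-2, n_outer=10):
    """Alternately update the precision matrix P and the kernel hyper-parameters."""
    n = len(X)
    theta = np.zeros(3)   # log(length_scale, signal_var, noise_var)
    P = np.eye(n)         # initial precision-matrix estimate
    for _ in range(n_outer):
        # P-step: minimize ||K P - I||_F^2 + lam ||P||_F^2 with theta fixed.
        # This particular choice admits the closed form P = (K^T K + lam I)^{-1} K^T.
        K = rbf_kernel(X, *np.exp(theta))
        P = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T)
        # theta-step: refit the kernel hyper-parameters with P fixed.
        theta = minimize(theta_objective, theta, args=(P, X, y),
                         method="L-BFGS-B").x
    return theta, P


# Toy usage: the learned precision matrix replaces K^{-1} in the predictive mean.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(40, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)
theta, P = alternating_fit(X, y)

X_new = np.linspace(-3.0, 3.0, 5)[:, None]
ls, sv, _ = np.exp(theta)
d2 = np.sum((X_new[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K_star = sv * np.exp(-0.5 * d2 / ls ** 2)
print(K_star @ (P @ y))   # GP-style predictive mean using P instead of K^{-1}
```

The P-step above has a closed form only because of the assumed Frobenius-norm regularizer; a sparsity-promoting penalty on P would instead require an iterative solver within each outer loop.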
