Abstract
Deep kernel learning (DKL) leverages the connection between Gaussian processes (GPs) and neural networks (NNs) to build an end-to-end hybrid model. It combines the capability of NNs to learn rich representations from massive data with the nonparametric property of GPs, which provides automatic regularization through a tradeoff between model fit and model complexity. However, the deterministic NN encoder may weaken the regularization of the subsequent GP part, especially on small datasets, because the latent representation is left unconstrained. We therefore present a complete deep latent-variable kernel learning (DLVKL) model in which the latent variables perform stochastic encoding to regularize the representation. We further enhance DLVKL in two respects: 1) an expressive variational posterior parameterized by a neural stochastic differential equation (NSDE), which improves the approximation quality, and 2) a hybrid prior that takes knowledge from both the SDE prior and the posterior to arrive at a flexible tradeoff. Extensive experiments show that DLVKL-NSDE performs similarly to the well-calibrated GP on small datasets and shows superiority on large datasets.
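To make the setting concrete, the sketch below illustrates the standard DKL pipeline that the abstract contrasts against: a deterministic NN encoder maps inputs to a latent representation, on which a GP kernel is placed. It is a minimal illustration using the GPyTorch library, not the paper's DLVKL-NSDE implementation; the network sizes and module names are hypothetical choices. The comment in `forward` marks the point where DLVKL would replace the deterministic encoding with a stochastic one.

```python
import torch
import gpytorch


class FeatureExtractor(torch.nn.Sequential):
    """Deterministic NN encoder x -> z (hypothetical architecture)."""

    def __init__(self, input_dim: int, latent_dim: int = 2):
        super().__init__(
            torch.nn.Linear(input_dim, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, latent_dim),
        )


class DKLRegression(gpytorch.models.ExactGP):
    """Standard deep kernel learning: GP defined on the NN's latent output."""

    def __init__(self, train_x, train_y, likelihood, feature_extractor):
        super().__init__(train_x, train_y, likelihood)
        self.feature_extractor = feature_extractor
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        # Deterministic encoding; DLVKL instead treats z as a latent variable
        # with a stochastic encoder, which regularizes the representation.
        z = self.feature_extractor(x)
        mean = self.mean_module(z)
        covar = self.covar_module(z)
        return gpytorch.distributions.MultivariateNormal(mean, covar)


# Usage sketch with toy data (assumed shapes: N x D inputs, N targets).
train_x = torch.randn(100, 5)
train_y = torch.sin(train_x.sum(dim=-1))
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = DKLRegression(train_x, train_y, likelihood, FeatureExtractor(input_dim=5))
```

Because the encoder here is a free deterministic map, the GP's complexity penalty acts only on the latent space the NN happens to produce, which is the weakness on small datasets that motivates the stochastic encoding in DLVKL.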