Abstract

Bayesian knowledge transfer in supervised learning often relies on a complete specification and optimization of the stochastic dependence between the source and target tasks. This is a critical requirement of completely modelled settings, and one that can often be difficult to justify. We propose a strategy to overcome it. The methodology relies on fully probabilistic design to develop a target algorithm that accepts source knowledge in the form of a probability distribution. We present this incompletely modelled setting in a supervised learning context where both the source and target tasks perform Gaussian process regression. Experimental evaluation demonstrates that transfer of the source distribution substantially improves the target learner's prediction performance when recovering a distorted nonparametric function realization from noisy data.
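The target task described above is standard Gaussian process regression: recovering a nonparametric function realization from noisy observations. The following minimal single-task sketch illustrates that baseline setting only; the kernel choice, hyperparameters, and toy data are illustrative assumptions, and the paper's source-to-target distributional transfer mechanism is not shown here.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=0.2, variance=1.0):
    # Squared-exponential (RBF) covariance between two sets of 1-D inputs
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise_var=0.01):
    # Exact GP regression posterior via a Cholesky factorization
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                 # posterior predictive mean
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v                 # posterior predictive covariance
    return mean, cov

# Toy problem: noisy observations of a smooth function realization
rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)
x_train = rng.uniform(0, 1, 20)
y_train = f(x_train) + 0.1 * rng.standard_normal(20)
x_test = np.linspace(0, 1, 50)
mean, cov = gp_posterior(x_train, y_train, x_test)
```

In the paper's incompletely modelled setting, the target learner would additionally condition on a probability distribution supplied by a source GP learner, rather than on raw source data or an explicit source-target dependence model.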


