Abstract

Large-scale Gaussian process (GP) regression is infeasible for large training data sets due to the cubic scaling of flops and quadratic storage involved in working with covariance matrices. Remedies in recent literature focus on divide-and-conquer, for example, partitioning into subproblems and inducing functional (and thus computational) independence. Such approximations can be speedy, accurate, and sometimes even more flexible than ordinary GPs. However, a big downside is loss of continuity at partition boundaries. Modern methods like local approximate GPs (LAGPs) imply effectively infinite partitioning and are thus both good and bad in this regard. Model averaging, an alternative to divide-and-conquer, can maintain absolute continuity but often over-smooths, diminishing accuracy. Here we propose putting LAGP-like methods into a local experts-like framework, blending partition-based speed with model-averaging continuity, as a flagship example of what we call precision aggregated local models (PALM). Using K LAGPs, each selecting n from N total data pairs, our scheme is at most cubic in n, quadratic in K, and linear in N. Extensive empirical illustration shows how PALM is at least as accurate as LAGP, can be much faster, and furnishes continuous predictions. Finally, we propose a sequential updating scheme that greedily refines a PALM predictor up to a computational budget.
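To make the precision-aggregation idea concrete, the sketch below combines K local GP predictions by inverse-variance (precision) weighting. This is a minimal, hypothetical Python rendering under stated assumptions, not the paper's implementation: the function palm_predict, the expert interface, and the simple normalized weights are illustrative choices, and PALM's actual weighting scheme may differ in its details.

```python
import numpy as np

def palm_predict(x, local_experts):
    """Hypothetical sketch: aggregate K local GP predictions at input x
    by inverse-variance (precision) weighting. Each expert is a callable
    returning a predictive (mean, variance) pair built from its own
    n-point local design, with n << N total data pairs."""
    preds = [expert(x) for expert in local_experts]       # K (mean, var) pairs
    mus = np.array([mu for mu, _ in preds])
    precisions = np.array([1.0 / s2 for _, s2 in preds])  # higher precision = more weight
    weights = precisions / precisions.sum()               # smooth in x, sums to 1
    mean = weights @ mus                                  # precision-weighted mean
    var = 1.0 / precisions.sum()                          # aggregated predictive variance
    return mean, var
```

Because each expert's predictive mean and variance vary smoothly in x, the precision weights, and hence the aggregated prediction, remain continuous across the input space, in contrast to hard partitioning schemes that break at boundaries.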
