Abstract

While many statistical approaches have tackled the problem of large spatial datasets, the issues arising from costly data movement and storage have long been set aside. Easy access to the data has been taken for granted and is now becoming an important bottleneck in the performance of statistical inference. As the availability of high-resolution spatial data continues to grow, developing efficient modeling techniques that leverage multi-processor and multi-storage capabilities is becoming a priority. To that end, we develop a distributed method for implementing Nearest-Neighbor Gaussian Process (NNGP) models for spatial interpolation and inference on large datasets. The proposed framework retains the exact implementation of the NNGP while allowing posterior inference to be computed either in a distributed fashion or sequentially. The method accommodates any grouping of the data, whether at random or by region. As a result, the NNGP model can be implemented with an even split of the computational burden across workers and minimal overhead at the master node.
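The key property the abstract relies on is that the NNGP likelihood factors into per-observation conditional Gaussian densities, each depending only on a small set of nearest neighbors, so any partition of the observations yields independent partial sums that a master node merely adds up. The sketch below illustrates this with a minimal, hypothetical 1-D example (exponential covariance, a single nearest neighbor per point); it is not the authors' implementation, only an illustration of the factorization under those assumptions.

```python
import math

def gauss_logpdf(y, mu, var):
    """Log-density of a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (y - mu) ** 2 / var)

def group_loglik(group, locs, y, sigma2, phi):
    """Partial NNGP log-likelihood for one group of observation indices.

    Toy setting: 1-D locations, exponential correlation exp(-phi * d),
    and m = 1 nearest neighbor among earlier-ordered points.
    """
    total = 0.0
    for i in group:
        if i == 0:
            # first point in the ordering: marginal N(0, sigma2)
            total += gauss_logpdf(y[0], 0.0, sigma2)
        else:
            # nearest neighbor among earlier-ordered points
            j = min(range(i), key=lambda k: abs(locs[i] - locs[k]))
            r = math.exp(-phi * abs(locs[i] - locs[j]))
            # conditional Gaussian given the neighbor's value
            total += gauss_logpdf(y[i], r * y[j], sigma2 * (1.0 - r * r))
    return total

# toy data and (hypothetical) covariance parameters
locs = [0.0, 0.3, 0.9, 1.1, 2.0, 2.2]
y = [0.5, 0.4, -0.2, -0.1, 0.8, 0.7]
sigma2, phi = 1.0, 1.0

# any grouping works; here an even split across two "workers"
groups = [[0, 2, 4], [1, 3, 5]]
partials = [group_loglik(g, locs, y, sigma2, phi) for g in groups]
total = sum(partials)  # the master node only sums scalars
```

Because each term touches only its own observation and its neighbor set, the two partial sums reproduce the full log-likelihood exactly, whatever grouping is chosen; this is the "even split with minimal master-node overhead" claimed in the abstract.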


