Abstract
The spatial cross-matching operation over geospatial polygonal datasets is important to a variety of GIS applications. However, it incurs substantial computational cost from the intersection and union of geospatial polygon pairs in large-scale datasets. This motivates the use of parallel computing capabilities such as GPUs to increase the efficiency of such operations. In this paper, we present a CPU-GPU hybrid platform to accelerate the cross-matching of geospatial datasets. Computing tasks are dynamically scheduled for execution on either the CPU or the GPU. To enable geospatial dataset processing on the GPU using a pixelization approach, we convert floating-point-valued vertices into integer-valued vertices with an adaptive scaling factor that is a function of the area of the minimum bounding box. We evaluate our framework on the Natural Earth Dataset and achieve a 10x speedup on an NVIDIA GeForce GTX750 GPU and a 14x speedup on a Tesla K80 GPU over 280,000 polygon pairs per tile across 400 tiles. We also investigate the effect of input data size on the I/O-to-computation ratio and find that a sufficiently large input size is required to fully utilize the computing power of the GPU. Finally, our comparison of the two GPUs demonstrates that efficient cross-matching can be achieved with a cost-effective GPU.
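The adaptive scaling step described above can be sketched as follows. This is an illustrative sketch only: the function names, the fixed grid resolution, and the exact scaling rule (target resolution divided by the square root of the minimum-bounding-box area) are assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of the pixelization pre-step: floating-point
# polygon vertices are mapped to integer grid coordinates using a
# scaling factor derived from the area of the minimum bounding box
# (MBB), so polygons of different extents fit a fixed integer grid.
import math

GRID_CELLS = 1 << 16  # assumed target resolution of the integer grid


def adaptive_scale(mbb_area: float) -> float:
    # Larger bounding boxes get a smaller per-unit scale, keeping the
    # rasterized polygon within the fixed-size integer grid.
    return GRID_CELLS / math.sqrt(mbb_area)


def pixelize(vertices, mbb):
    xmin, ymin, xmax, ymax = mbb
    s = adaptive_scale((xmax - xmin) * (ymax - ymin))
    # Shift to the MBB origin, then scale and round to integers.
    return [(int(round((x - xmin) * s)), int(round((y - ymin) * s)))
            for (x, y) in vertices]


# Example: a unit square inside a 2x2 bounding box.
poly = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(pixelize(poly, (0.0, 0.0, 2.0, 2.0)))
# -> [(0, 0), (32768, 0), (32768, 32768), (0, 32768)]
```

Converting to integers this way lets the GPU kernels rasterize and compare polygon pairs with fast integer arithmetic instead of floating-point geometry tests.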