Abstract

The proliferation of lidar technology in remote sensing has resulted in extremely large, high-resolution point clouds covering a wide variety of terrain. Constructing a grid digital elevation model (DEM) from these large data sets requires extensive computational resources and ample disk space. We propose a framework for leveraging modern computing resources, including multi-core distributed systems and general-purpose GPU computing, to reduce computational bottlenecks and accelerate DEM construction. We employ an I/O-efficient strategy using quad trees to automatically partition the lidar point clouds into a set of independent work bundles. We then distribute these work bundles to multiple GPU-equipped hosts, which independently interpolate a portion of the DEM and return partial results. Finally, we gather the partial results and assemble the final DEM I/O-efficiently. Our approach balances I/O, computation, and network communication to reduce bottlenecks. Experimental results show that our approach scales linearly with the number of compute hosts and achieves speed-ups of 25× or greater using GPU computing. These results make it practical to use more complex interpolation methods such as regularized splines with tension, which provide geomorphological advantages over simpler interpolation methods such as linear interpolation, nearest neighbor interpolation, or natural neighbor interpolation.
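To illustrate the partition step of the pipeline described above, the following is a minimal sketch of quad-tree partitioning of a lidar point set into independent work bundles. It is an assumption-based illustration, not the authors' implementation; the names QuadTreeNode, max_points, and partition are hypothetical, and details such as boundary buffers for spline interpolation near tile edges are omitted.

```python
# Minimal sketch (hypothetical, not the paper's code): recursively split the
# point cloud into quad-tree leaves so that each leaf holds at most
# max_points points; each leaf then serves as an independent work bundle.

from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float, float]  # (x, y, elevation)

@dataclass
class QuadTreeNode:
    xmin: float
    ymin: float
    xmax: float
    ymax: float
    points: List[Point] = field(default_factory=list)
    children: List["QuadTreeNode"] = field(default_factory=list)

def partition(node: QuadTreeNode, max_points: int) -> List[QuadTreeNode]:
    """Split a node until each leaf holds at most max_points points;
    the returned leaves are the independent interpolation work bundles."""
    if len(node.points) <= max_points:
        return [node]
    cx = (node.xmin + node.xmax) / 2.0
    cy = (node.ymin + node.ymax) / 2.0
    quads = [
        QuadTreeNode(node.xmin, node.ymin, cx, cy),        # lower-left
        QuadTreeNode(cx, node.ymin, node.xmax, cy),        # lower-right
        QuadTreeNode(node.xmin, cy, cx, node.ymax),        # upper-left
        QuadTreeNode(cx, cy, node.xmax, node.ymax),        # upper-right
    ]
    for x, y, z in node.points:
        idx = (0 if x < cx else 1) + (0 if y < cy else 2)
        quads[idx].points.append((x, y, z))
    node.points = []          # interior nodes keep no points
    node.children = quads
    leaves: List[QuadTreeNode] = []
    for q in quads:
        leaves.extend(partition(q, max_points))
    return leaves
```

In the framework described in the abstract, each such leaf would be serialized and shipped to a GPU-equipped host for interpolation, and the resulting partial DEM tiles would later be gathered and stitched into the final grid I/O-efficiently.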
