Abstract

High-accuracy surface modeling (HASM) was developed to meet the needs of applications that demand accurate topographic data, such as catchment hydrologic modeling and assessment of the impact of anthropogenic activities on environmental systems. Although HASM produces digital elevation model (DEM) surfaces of higher accuracy than classical methods such as inverse distance weighting, spline, and kriging, it requires numerous iterations to solve large linear systems, which impedes its application to high-resolution, large-scale surface interpolation. This paper demonstrates the use of graphics processing units (GPUs) to accelerate HASM in constructing large-scale, high-resolution DEM surfaces. We parallelized the linear system solver of HASM with Compute Unified Device Architecture (CUDA), a parallel programming model developed by NVIDIA, and designed a memory-saving strategy that enables the HASM algorithm to run on GPUs. The speedup of the GPU-based algorithm over the CPU-based algorithm was measured through simulations of both an ideal Gaussian synthetic surface and a real topographic surface on the Loess Plateau of Gansu Province. The GPU-parallelized algorithm attains a speedup of more than 10× relative to the CPU-based algorithm, and the speedup increases with the scale and resolution of the dataset. The memory management strategy reduces memory usage by more than eight times the number of grid cells. Implementing HASM on GPUs makes it feasible to model large-scale, high-resolution surfaces within a reasonable time and points to the potential of GPUs as massively parallel co-processors for arithmetic-intensive data-processing applications.
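The abstract does not give implementation details, but HASM leads to a large, sparse linear system whose iterative solution is dominated by sparse matrix-vector products, and storing only the nonzero coefficients (e.g., in CSR format) is one plausible memory-saving strategy on the GPU. The following is a minimal, hypothetical CUDA sketch of such a kernel, with one thread per matrix row; all names and the tiny test matrix are illustrative assumptions, not the authors' code.

// Hypothetical sketch: CSR sparse matrix-vector product y = A*x, the operation
// that dominates each iteration of a Krylov-type solver for HASM's linear system.
// Storing only nonzeros (row_ptr/col_idx/val) avoids holding the full n-by-n matrix.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void spmv_csr(int n, const int *row_ptr, const int *col_idx,
                         const double *val, const double *x, double *y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per matrix row
    if (row < n) {
        double sum = 0.0;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += val[j] * x[col_idx[j]];
        y[row] = sum;
    }
}

int main() {
    // Tiny 3x3 tridiagonal example standing in for the HASM coefficient matrix.
    const int n = 3;
    int h_row_ptr[] = {0, 2, 5, 7};
    int h_col_idx[] = {0, 1, 0, 1, 2, 1, 2};
    double h_val[]  = {2, -1, -1, 2, -1, -1, 2};
    double h_x[]    = {1, 1, 1}, h_y[3];

    int *d_row_ptr, *d_col_idx; double *d_val, *d_x, *d_y;
    cudaMalloc(&d_row_ptr, sizeof(h_row_ptr));
    cudaMalloc(&d_col_idx, sizeof(h_col_idx));
    cudaMalloc(&d_val, sizeof(h_val));
    cudaMalloc(&d_x, sizeof(h_x));
    cudaMalloc(&d_y, sizeof(h_y));
    cudaMemcpy(d_row_ptr, h_row_ptr, sizeof(h_row_ptr), cudaMemcpyHostToDevice);
    cudaMemcpy(d_col_idx, h_col_idx, sizeof(h_col_idx), cudaMemcpyHostToDevice);
    cudaMemcpy(d_val, h_val, sizeof(h_val), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x, h_x, sizeof(h_x), cudaMemcpyHostToDevice);

    spmv_csr<<<(n + 255) / 256, 256>>>(n, d_row_ptr, d_col_idx, d_val, d_x, d_y);
    cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);
    printf("y = [%g, %g, %g]\n", h_y[0], h_y[1], h_y[2]);  // expected: [1, 0, 1]

    cudaFree(d_row_ptr); cudaFree(d_col_idx); cudaFree(d_val);
    cudaFree(d_x); cudaFree(d_y);
    return 0;
}

In a full solver this kernel would be called once per iteration inside a conjugate-gradient loop, which matches the paper's observation that the iterative solution of the linear system is the main cost to be accelerated.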
