Abstract

The Generalized Likelihood Uncertainty Estimation (GLUE) method has thrived for decades, and a large number of applications to hydrological models have demonstrated its effectiveness in uncertainty and parameter estimation. For many years, however, the poor computational efficiency of GLUE has hampered its wider application. A feasible way to address this problem is to accelerate the traditional GLUE method with modern CPU-GPU hybrid high-performance computer cluster technology. In this study, we developed a highly parallel, large-scale GLUE method based on a CPU-GPU hybrid computer cluster to improve its computational efficiency. Intel Xeon multi-core CPUs and NVIDIA Tesla many-core GPUs were adopted, and the source code was developed with MPICH2, C++ with OpenMP 2.0, and CUDA 6.5. The parallel GLUE method was tested with a widely used hydrological model (the Xinanjiang model) to investigate its performance and scalability. Comparison results indicated that the parallel GLUE method outperformed the traditional serial method and has good application prospects on supercomputer clusters such as ORNL's Summit and LLNL's Sierra in the TOP500 list.
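
To illustrate the approach, the following is a minimal CUDA sketch of the GPU stage of a parallel GLUE evaluation; it is not the authors' implementation. The Xinanjiang model is replaced by a trivial hypothetical placeholder (run_model_step), the likelihood is a Nash-Sutcliffe efficiency, and the behavioural cutoff of 0.5 is illustrative. In the full hybrid scheme described in the abstract, MPI would distribute batches of Monte Carlo samples across cluster nodes and OpenMP across CPU cores; only the many-core GPU stage is sketched here.

// Minimal sketch (not the authors' code): each GPU thread evaluates one GLUE
// parameter set. run_model_step is a hypothetical placeholder for the
// Xinanjiang routines; the likelihood is a Nash-Sutcliffe efficiency (NSE).
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define N_PARAMS  2       // placeholder; the real Xinanjiang model has ~15
#define N_STEPS   365     // length of the flow series
#define N_SAMPLES 10000   // Monte Carlo samples drawn by GLUE

// Toy linear-reservoir stand-in for one time step of the rainfall-runoff model.
__device__ float run_model_step(const float* p, float storage) {
    return p[0] * storage + p[1];
}

// One thread per parameter set: simulate the series, accumulate the NSE terms.
__global__ void glue_kernel(const float* params, const float* obs,
                            float obs_mean, float* likelihood, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    const float* p = &params[i * N_PARAMS];
    float storage = 10.0f, sse = 0.0f, sst = 0.0f;
    for (int t = 0; t < N_STEPS; ++t) {
        float q = run_model_step(p, storage);
        storage = 0.9f * storage + 1.0f;          // toy water balance
        sse += (q - obs[t]) * (q - obs[t]);
        sst += (obs[t] - obs_mean) * (obs[t] - obs_mean);
    }
    likelihood[i] = 1.0f - sse / sst;             // NSE as the GLUE likelihood
}

int main() {
    // Synthetic "observations" (a real study would read gauged streamflow).
    float obs[N_STEPS], obs_mean = 0.0f;
    for (int t = 0; t < N_STEPS; ++t) { obs[t] = 5.0f + 0.01f * t; obs_mean += obs[t]; }
    obs_mean /= N_STEPS;

    // GLUE stage 1: uniform Monte Carlo sampling of the parameter space.
    float* params = (float*)malloc(sizeof(float) * N_SAMPLES * N_PARAMS);
    for (int i = 0; i < N_SAMPLES * N_PARAMS; ++i)
        params[i] = (float)rand() / RAND_MAX;

    float *d_params, *d_obs, *d_like;
    cudaMalloc(&d_params, sizeof(float) * N_SAMPLES * N_PARAMS);
    cudaMalloc(&d_obs,    sizeof(float) * N_STEPS);
    cudaMalloc(&d_like,   sizeof(float) * N_SAMPLES);
    cudaMemcpy(d_params, params, sizeof(float) * N_SAMPLES * N_PARAMS, cudaMemcpyHostToDevice);
    cudaMemcpy(d_obs, obs, sizeof(float) * N_STEPS, cudaMemcpyHostToDevice);

    // GLUE stage 2: evaluate all samples in parallel on the many-core GPU.
    int threads = 256, blocks = (N_SAMPLES + threads - 1) / threads;
    glue_kernel<<<blocks, threads>>>(d_params, d_obs, obs_mean, d_like, N_SAMPLES);

    float* like = (float*)malloc(sizeof(float) * N_SAMPLES);
    cudaMemcpy(like, d_like, sizeof(float) * N_SAMPLES, cudaMemcpyDeviceToHost);

    // GLUE stage 3: keep "behavioural" sets above an (illustrative) NSE cutoff.
    int behavioural = 0;
    for (int i = 0; i < N_SAMPLES; ++i)
        if (like[i] > 0.5f) ++behavioural;
    printf("behavioural sets: %d / %d\n", behavioural, N_SAMPLES);

    cudaFree(d_params); cudaFree(d_obs); cudaFree(d_like);
    free(params); free(like);
    return 0;
}

Mapping one parameter set to one thread fits GLUE naturally: the Monte Carlo samples are independent, so the evaluation is embarrassingly parallel and scales with the number of GPU cores.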
