Abstract

The Weather Research and Forecasting (WRF) model has been widely employed for weather prediction and atmospheric simulation, serving both forecasting and research purposes. Land-surface models (LSMs) are components of the WRF model that provide heat and moisture fluxes over land and sea-ice points. The 5-layer thermal diffusion scheme is an LSM based on the MM5 soil temperature model, with an energy budget made up of sensible, latent, and radiative heat fluxes. Because there are no interactions among horizontal grid points, LSMs are well suited to massively parallel processing. The study presented in this article demonstrates parallel computing efforts on the WRF 5-layer thermal diffusion scheme using a Graphics Processing Unit (GPU). Since this scheme is only one intermediate module of the entire WRF model, no I/O transfer is involved in the intermediate process. Using one NVIDIA GTX 680 GPU in the case without I/O transfer, our optimization of the GPU-based 5-layer thermal diffusion scheme reaches a speedup as high as 247.5x with respect to one CPU core, whereas the speedup for one CPU socket with respect to one CPU core is only 3.1x. The speedup rises to 332x with respect to one CPU core when three GPUs are applied.
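The column independence that makes LSMs attractive for GPUs can be illustrated with a minimal CUDA sketch (this is not the authors' implementation; the array layout, the diffusivity `kappa`, and the surface-forcing term are illustrative assumptions): one thread advances the 5-layer soil-temperature column at a single horizontal grid point, with no communication between threads.

```cuda
// Hypothetical sketch: each GPU thread updates the 5-layer soil-temperature
// column at one horizontal grid point (i, j). Columns are independent, so no
// inter-thread communication or synchronization is needed.
#include <cuda_runtime.h>

#define NLAYERS 5

__global__ void soil_thermal_diffusion(float *tslb,       // soil temps, layout [k][j][i]
                                       const float *hfx,  // surface forcing, layout [j][i]
                                       int nx, int ny, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // west-east index
    int j = blockIdx.y * blockDim.y + threadIdx.y;  // south-north index
    if (i >= nx || j >= ny) return;

    // Load this column's layer temperatures into registers.
    float t[NLAYERS];
    for (int k = 0; k < NLAYERS; ++k)
        t[k] = tslb[(k * ny + j) * nx + i];

    // Illustrative explicit diffusion step between adjacent soil layers;
    // kappa and the surface term are placeholder values, not WRF constants.
    const float kappa = 0.1f;
    float tnew[NLAYERS];
    tnew[0] = t[0] + dt * (kappa * (t[1] - t[0]) + hfx[j * nx + i]);
    for (int k = 1; k < NLAYERS - 1; ++k)
        tnew[k] = t[k] + dt * kappa * (t[k - 1] - 2.0f * t[k] + t[k + 1]);
    tnew[NLAYERS - 1] = t[NLAYERS - 1]
                      + dt * kappa * (t[NLAYERS - 2] - t[NLAYERS - 1]);

    // Write the updated column back to global memory.
    for (int k = 0; k < NLAYERS; ++k)
        tslb[(k * ny + j) * nx + i] = tnew[k];
}
```

A 2-D launch, for example `dim3 block(16, 16)` with a grid covering the nx-by-ny horizontal domain, assigns one thread per grid point; because each column is independent, no shared memory or barriers are required, which is the property the abstract refers to when it calls LSMs favorable for massively parallel processing.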
