Abstract

In this study, a CUDA Fortran-based GPU-accelerated Laplace equation model was developed and applied to several cases. The Laplace equation is one of the equations that can physically describe groundwater flow, and it admits analytical solutions. Numerical models of this kind require a large amount of data to reproduce the flow with high physical accuracy, and therefore require substantial computational time. To shorten the computation time using CUDA technology, large-scale parallel computations were performed on the GPU, and the program was written to reduce the number of data transfers between the CPU and GPU. A GPU consists of many ALUs specialized for graphics processing and, by using these ALUs, can perform more concurrent computations than a CPU. The results of the GPU-accelerated model were compared with the analytical solution of the Laplace equation to verify their accuracy, and they were in good agreement. As the number of grid points increased, the computational time of the GPU-accelerated model decreased progressively relative to that of the CPU-based Laplace equation model; overall, the computational time of the GPU-accelerated Laplace equation model was reduced by up to about 50 times.

Highlights

  • In the field of Computational Fluid Dynamics (CFD), research on realistically representing computed flow results, enabled by improvements in computer performance, is being actively conducted

  • Vanderbauwhede and Takemi [5] developed an OpenCL-based Weather Research and Forecasting (WRF) model, and their results showed that the computational execution time of the model using the Graphics Processing Unit (GPU) was reduced by a factor of two compared to the model using the Central Processing Unit (CPU)

  • In order to shorten the computational execution time for the numerical analysis of the Laplace equation, a GPU-accelerated Laplace equation model was implemented to verify the accuracy of the numerical results and to evaluate the computational execution time



Introduction

In the field of Computational Fluid Dynamics (CFD), research on realistically representing computed flow results, enabled by improvements in computer performance, is being actively conducted. Such research requires a large amount of data to reproduce the flow with high physical accuracy, and significant computational time to process that data. Originally, only a Central Processing Unit (CPU) was used to perform computations on large amounts of data; over roughly the past 20 years, techniques for using a Graphics Processing Unit (GPU) have been developed and adopted [1]. GPUs have emerged as a viable, inexpensive, and highly portable alternative to large and expensive high-performance computing clusters [2]. Because of hardware differences from the CPU, GPUs require a different style of programming when performing data computations. GPU manufacturer NVIDIA, in collaboration with the compiler supplier Portland Group Incorporated (PGI), developed a compiler that can use the GPU for data computation: CUDA Fortran, which combines CUDA with PGI Fortran.
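The paper's solver itself is written in CUDA Fortran and its source is not reproduced here. As a language-neutral illustration of the numerical scheme such a model parallelises, the sketch below shows a Jacobi-style relaxation of the 2D Laplace equation in Python, checked against an analytical (harmonic) solution in the spirit of the paper's verification. The Jacobi scheme and the test function u(x, y) = xy are assumptions for illustration; the paper does not state its exact iteration method here. On a GPU, the inner averaging update is the part that would be mapped to one thread per grid point.

```python
import numpy as np

def jacobi_laplace(u, tol=1e-8, max_iter=10000):
    """Jacobi relaxation for the 2D Laplace equation.

    Boundary values of `u` are held fixed (Dirichlet conditions);
    each interior point is repeatedly replaced by the average of its
    four neighbours until the largest update falls below `tol`.
    """
    u = u.copy()
    for _ in range(max_iter):
        u_new = u.copy()
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                    + u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

# Verification against an analytical solution: u(x, y) = x*y is
# harmonic, so with its boundary values the interior should
# converge to x*y at the grid points.
n = 17
x = np.linspace(0.0, 1.0, n)
exact = np.outer(x, x)                 # u(x, y) = x*y on the grid
u0 = np.zeros((n, n))
u0[0, :], u0[-1, :] = exact[0, :], exact[-1, :]   # fixed boundaries
u0[:, 0], u0[:, -1] = exact[:, 0], exact[:, -1]
u = jacobi_laplace(u0)
print(np.max(np.abs(u - exact)))
```

Because the five-point discrete Laplacian of xy vanishes exactly, the iteration converges to the analytical values at the nodes, so the printed error is governed only by the stopping tolerance.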

