Abstract

This work studies parallel computing techniques on the GPU (Graphics Processing Unit) to optimize the performance of a fragment of computational code, implemented as a Dataflow system, that is part of a meteorological numerical model and is responsible for computing the advection transport phenomenon. Algorithmic limitations affecting GPU efficiency are also examined through extensive code instrumentation. Given the difficulties found in the original algorithm, whose flow dependencies and coarse-grained parallelism hinder GPU acceleration, the performance gain obtained with the GPU may be considered fair.
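For readers unfamiliar with the advection transport computation mentioned above, the sketch below illustrates the kind of stencil update involved, using a standard first-order upwind scheme for the 1D linear advection equation. This is a generic textbook illustration, not the model's actual algorithm: the function name, grid, and parameters are assumptions for the example.

```python
import numpy as np

def advect_1d(q, u, dx, dt, steps):
    """Illustrative first-order upwind scheme for dq/dt + u*dq/dx = 0, u > 0.
    (A generic sketch, not the meteorological model's actual kernel.)"""
    q = q.copy()
    c = u * dt / dx  # Courant number; the scheme is stable for 0 <= c <= 1
    for _ in range(steps):
        # Upwind difference: each cell is updated from its upstream neighbor.
        q[1:] = q[1:] - c * (q[1:] - q[:-1])
    return q

# With c = 1 the scheme shifts the field one cell downstream per step.
pulse = np.array([0.0, 1.0, 0.0, 0.0])
print(advect_1d(pulse, u=1.0, dx=1.0, dt=1.0, steps=1))  # → [0. 0. 1. 0.]
```

Stencil updates like this offer fine-grained data parallelism per grid point, which is why flow dependencies elsewhere in the code, as the abstract notes, are the limiting factor for GPU efficiency rather than the advection update itself.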
