Abstract

We present a novel implementation of the modal discontinuous Galerkin (DG) method for hyperbolic conservation laws in two dimensions on graphics processing units (GPUs) using NVIDIA's Compute Unified Device Architecture (CUDA). Both flexible and highly accurate, DG methods accommodate parallel architectures well because their discontinuous nature produces element-local approximations. GPUs, in turn, are well suited to high-performance scientific computing: these powerful, massively parallel, cost-effective devices have recently added support for double-precision floating-point arithmetic. Computed examples for the Euler equations over unstructured triangular meshes demonstrate the effectiveness of our implementation on an NVIDIA GTX 580 device. Profiling of our method reveals performance comparable with an existing nodal DG-GPU implementation for linear problems. Copyright © 2014 John Wiley & Sons, Ltd.
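
The element-local structure alluded to above can be illustrated with a minimal CUDA sketch. This is an assumption-laden illustration, not the authors' implementation: each thread advances the modal coefficients of a single element with a forward-Euler step, the inverse mass matrix is taken as diagonal (as for an orthogonal modal basis), and the residual assembly, which does couple neighbouring elements through face fluxes, is omitted. The names dg_update, N_MODES, and the array layout are hypothetical.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

constexpr int N_MODES = 6;  // e.g. a P2 modal basis on a triangle (illustrative choice)

// Each thread owns one element and advances its modal coefficients with a
// forward-Euler step. No neighbouring element's data is read or written here,
// which is the element-local structure that makes DG map naturally onto GPUs.
__global__ void dg_update(double*       coeffs,    // [n_elems * N_MODES] modal coefficients
                          const double* residual,  // [n_elems * N_MODES] precomputed RHS
                          const double* inv_mass,  // [N_MODES] diagonal of M^{-1}
                          double dt, int n_elems)
{
    int e = blockIdx.x * blockDim.x + threadIdx.x;
    if (e >= n_elems) return;

    for (int m = 0; m < N_MODES; ++m) {
        int idx = e * N_MODES + m;
        coeffs[idx] += dt * inv_mass[m] * residual[idx];
    }
}

int main()
{
    const int n_elems = 1 << 16;
    const double dt = 1e-4;
    const size_t nc = size_t(n_elems) * N_MODES;

    // Host-side dummy data, purely for illustration.
    std::vector<double> h_coeffs(nc, 1.0), h_residual(nc, 0.5);
    std::vector<double> h_inv_mass(N_MODES, 1.0);  // orthogonal basis -> diagonal mass matrix

    double *d_coeffs, *d_residual, *d_inv_mass;
    cudaMalloc(&d_coeffs,   nc * sizeof(double));
    cudaMalloc(&d_residual, nc * sizeof(double));
    cudaMalloc(&d_inv_mass, N_MODES * sizeof(double));
    cudaMemcpy(d_coeffs,   h_coeffs.data(),   nc * sizeof(double),      cudaMemcpyHostToDevice);
    cudaMemcpy(d_residual, h_residual.data(), nc * sizeof(double),      cudaMemcpyHostToDevice);
    cudaMemcpy(d_inv_mass, h_inv_mass.data(), N_MODES * sizeof(double), cudaMemcpyHostToDevice);

    const int block = 256, grid = (n_elems + block - 1) / block;
    dg_update<<<grid, block>>>(d_coeffs, d_residual, d_inv_mass, dt, n_elems);
    cudaDeviceSynchronize();

    cudaMemcpy(h_coeffs.data(), d_coeffs, nc * sizeof(double), cudaMemcpyDeviceToHost);
    printf("coeffs[0] after one step: %f\n", h_coeffs[0]);  // expect 1.0 + dt * 0.5

    cudaFree(d_coeffs); cudaFree(d_residual); cudaFree(d_inv_mass);
    return 0;
}
```

Because no inter-element data is touched in the update, one thread (or thread block) per element suffices and the kernel scales with the mesh size; in a full DG solver the surface-flux kernel is the only stage that reads neighbouring elements' traces.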
