Abstract
In this paper, we discuss our approach to the GPU implementation of the Discontinuous Galerkin Time-Domain (DGTD) method for solving the time-dependent Maxwell's equations. We exploit the inherent parallelism of DGTD and combine the computing capabilities of GPUs with the benefits of a local time-stepping strategy. This combination yields a significant increase in efficiency and a reduction in computational time, especially for multi-scale applications.