Abstract

Algorithmic differentiation (AD) is a tool for generating discrete adjoint solvers, which efficiently compute gradients of functions with many inputs, for example in gradient-based optimization. AD is often applied to large computations such as stencil operators, an important part of most structured-mesh PDE solvers. Stencil computations are often parallelized, for example with OpenMP, and optimized with techniques such as cache-blocking and tiling to fully utilize multicore CPUs, many-core accelerators, and GPUs. Differentiating these codes with conventional reverse-mode AD yields adjoint codes that cannot be expressed as stencil operations and may not be easily parallelizable, and that therefore leave most of the compute power of modern architectures unused. We present a method that combines forward-mode AD and loop transformation to generate adjoint solvers that use the same memory access pattern as the original computation from which they are derived and can benefit from the same optimization techniques. The effectiveness of this method is demonstrated by generating a scalable adjoint CFD solver for multicore CPUs and Xeon Phi accelerators.
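
As a rough illustration of the problem the abstract describes (not code from the paper), the C sketch below differentiates a hypothetical 1D three-point stencil with made-up coefficients a, b, c. The conventional reverse-mode adjoint scatters increments into the input gradient, so its iterations conflict and it is no longer a stencil; a gather-form adjoint, of the kind obtainable by transposing the forward-mode derivative, writes one element per iteration, keeps the primal's access pattern, and parallelizes the same way.

/* Hypothetical sketch: 1D three-point stencil and two forms of its adjoint.
 * All function names, sizes, and coefficients are illustrative only. */
#include <stdio.h>

#define N 10

/* Primal stencil: y[i] = a*x[i-1] + b*x[i] + c*x[i+1] on interior points. */
void stencil(const double *x, double *y, double a, double b, double c) {
    for (int i = 1; i < N - 1; ++i)
        y[i] = a * x[i - 1] + b * x[i] + c * x[i + 1];
}

/* Conventional reverse-mode adjoint (scatter form): each iteration updates
 * three entries of xb, so concurrent iterations would race on xb and the
 * loop no longer has the primal's stencil structure. */
void stencil_adjoint_scatter(double *xb, const double *yb,
                             double a, double b, double c) {
    for (int i = 1; i < N - 1; ++i) {
        xb[i - 1] += a * yb[i];
        xb[i]     += b * yb[i];
        xb[i + 1] += c * yb[i];
    }
}

/* Gather-form adjoint: xb[i] collects contributions from the yb entries
 * that depend on x[i]; one write per iteration, same access pattern as
 * the primal, so the loop parallelizes cleanly. */
void stencil_adjoint_gather(double *xb, const double *yb,
                            double a, double b, double c) {
    #pragma omp parallel for
    for (int i = 0; i < N; ++i) {
        double acc = 0.0;
        if (i + 1 >= 1 && i + 1 < N - 1) acc += a * yb[i + 1]; /* x[i] feeds y[i+1] via a */
        if (i     >= 1 && i     < N - 1) acc += b * yb[i];     /* x[i] feeds y[i]   via b */
        if (i - 1 >= 1 && i - 1 < N - 1) acc += c * yb[i - 1]; /* x[i] feeds y[i-1] via c */
        xb[i] += acc;
    }
}

int main(void) {
    double yb[N], xb1[N] = {0}, xb2[N] = {0};
    for (int i = 0; i < N; ++i) yb[i] = (double)i;
    stencil_adjoint_scatter(xb1, yb, 0.5, -1.0, 0.5);
    stencil_adjoint_gather(xb2, yb, 0.5, -1.0, 0.5);
    for (int i = 0; i < N; ++i)              /* both forms give the same gradient */
        printf("%d: scatter=%g gather=%g\n", i, xb1[i], xb2[i]);
    return 0;
}

The two adjoint routines compute identical gradients; the difference is purely in the access pattern, which is what determines whether cache-blocking, tiling, and OpenMP parallelization carry over from the primal.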
