Abstract

Many scientific problems, such as classifier training or medical image reconstruction, can be expressed as the minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives in a dedicated data structure, referred to as the “tape”. Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size, both overall and per thread. We show how these limitations can be mitigated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes, which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.

Program summary

Program title: AD-GPU
Catalogue identifier: AEYX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEYX_v1_0.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 16715
No. of bytes in distributed program, including test data, etc.: 143683
Distribution format: tar.gz
Programming language: C++ and CUDA.
Computer: Any computer with a compatible C++ compiler and a GPU with CUDA capability 3.0 or higher.
Operating system: Windows 7 or Linux.
RAM: 16 Gbyte
Classification: 4.9, 4.12, 6.1, 6.5.
External routines: CUDA 6.5, Intel MKL (optional) and routines from BLAS, LAPACK and CUBLAS
Nature of problem: Gradients are required for many optimization problems, e.g. classifier training or nonlinear image reconstruction. Often, the function whose gradient is required can be implemented as a computer program. Then, algorithmic differentiation methods can be used to compute the gradient. Depending on the approach, this may result in excessive requirements of computational resources, i.e. memory and arithmetic computations. GPUs provide massive computational resources but require special considerations to distribute the workload onto many light-weight threads.
Solution method: Adjoint algorithmic differentiation allows efficient computation of gradients of cost functions given as computer programs. The gradient can theoretically be computed using a similar amount of arithmetic operations as one function evaluation. Optimal usage of parallel processors and limited memory is a major challenge which can be mitigated by the use of vectorization.
Restrictions: To use the GPU-accelerated adjoint algorithmic differentiation method, the cost function must be implemented using the provided AD-GPU intrinsics for matrix and vector operations (an illustrative sketch of this style of interface follows below).
Unusual features: GPU acceleration.
Additional comments: The code uses some features of C++11, e.g. std::shared_ptr. Alternatively, the boost library can be used.
Running time: The time to run the example program is a few minutes, or up to a few hours to reproduce the performance measurements.
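
For illustration, below is a minimal, hypothetical C++ sketch of the idea behind such vector/matrix intrinsics. All names in it (Tape, MatVecNode, mat_vec) are invented for this example and are not the actual AD-GPU interface; the sketch only assumes, as the abstract states, that the cost function is built from matrix and vector operations which the AAD software recognizes as intrinsic functions, so that a whole matrix-vector product is recorded as one tape node and the forward and reverse sweeps can delegate the heavy work to optimized routines such as gemv from BLAS, Intel MKL or CUBLAS.

#include <cstddef>
#include <vector>

// Hypothetical sketch, not the actual AD-GPU API: one tape node per
// matrix-vector product y = A*x (A is m-by-n, row-major, treated as
// constant data, e.g. a fixed system matrix in image reconstruction).
struct MatVecNode {
    std::size_t m, n;
    const double* A;      // matrix used in the forward pass
    double*       x_adj;  // adjoint of the input, accumulated here
    const double* y_adj;  // adjoint of the output, filled by later nodes

    // Reverse rule: x_adj += A^T * y_adj. In a vectorized implementation
    // this loop nest would be a single optimized gemv call (e.g.
    // cblas_dgemv on the CPU, cublasDgemv on the GPU). If A were an
    // active variable, its adjoint would be a rank-1 update (ger).
    void backpropagate() const {
        for (std::size_t j = 0; j < n; ++j)
            for (std::size_t i = 0; i < m; ++i)
                x_adj[j] += A[i * n + j] * y_adj[i];
    }
};

struct Tape {
    std::vector<MatVecNode> nodes;

    // Reverse sweep: interpret the recorded nodes in opposite order.
    void backpropagate() {
        for (auto it = nodes.rbegin(); it != nodes.rend(); ++it)
            it->backpropagate();
    }
};

// "Intrinsic" for y = A*x: runs the forward pass and records ONE node,
// instead of the m*n scalar multiply-add entries a naive scalar tape
// would store. The forward loop would likewise map to one gemv call.
void mat_vec(Tape& tape, std::size_t m, std::size_t n, const double* A,
             const double* x, double* y, double* x_adj, const double* y_adj) {
    for (std::size_t i = 0; i < m; ++i) {
        y[i] = 0.0;
        for (std::size_t j = 0; j < n; ++j)
            y[i] += A[i * n + j] * x[j];
    }
    tape.nodes.push_back(MatVecNode{m, n, A, x_adj, y_adj});
}

With a scheme of this kind, the tape grows with the number of high-level operations rather than with the number of scalar operations, which is the kind of memory reduction the abstract refers to.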

Highlights

  • Assuming a run time of 1 min for the original simulation, the computation of a gradient of size n = 10^6 would take between 1 and 100 min in adjoint Algorithmic Differentiation (AD) mode, compared to at least 10^6 + 1 min (almost two years) when using finite difference approximation or tangent (i.e. forward) mode AD.

  • The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of Graphics Processing Units (GPUs) can be utilized for adjoint algorithmic differentiation (AAD) using this concept. We show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.

  • The aim of this study is to provide software for GPU-accelerated AAD and to show its value using performance measurements.


Summary

Introduction

Assuming a run time of 1 min for the original simulation, the computation of a gradient of size n = 10^6 would take between 1 and 100 min in adjoint AD mode, compared to at least 10^6 + 1 min (almost two years) when using finite difference approximation or tangent (i.e. forward) mode AD. This run time will most likely turn out to be prohibitive, rendering the use of gradient-based optimization techniques infeasible unless adjoint mode AD is available. Gradients of this size are very common, for example, in computational fluid dynamics simulations run frequently in the atmospheric sciences and in automotive or aircraft design.
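
For context, these figures follow from the standard operation-count argument for algorithmic differentiation (a general observation, not specific to this software): finite differences and tangent (forward) mode require roughly one function evaluation per gradient component in addition to the base evaluation, whereas adjoint mode costs a small constant multiple of a single evaluation, independent of n. With an evaluation time T_f = 1 min and n = 10^6:

\[
T_{\mathrm{FD/tangent}} \ge (n+1)\,T_f = (10^6+1)\ \mathrm{min} \approx 694\ \mathrm{days} \approx 1.9\ \mathrm{years},
\qquad
T_{\mathrm{adjoint}} \approx c\,T_f,\quad c \in [1,100],\ \text{i.e. } 1\text{--}100\ \mathrm{min}.
\]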


