Abstract

In this paper we re-formulate automatic differentiation (in particular, backward automatic differentiation, also known as adjoint automatic differentiation, AAD) for random variables. While this is merely a formal re-interpretation, it allows us to investigate the algorithms in the presence of stochastic operators such as expectation, conditional expectation, or indicator functions. We then specialize the algorithms to efficiently incorporate (conditional) expectation operators without the need to differentiate an approximation of the (conditional) expectation. Under a comparatively mild assumption, it is possible to retain the simplicity of the backward automatic differentiation algorithm in the presence of (conditional) expectation operators. This simplifies important applications such as, in mathematical finance, the application of backward automatic differentiation to the valuation of Bermudan options or the calculation of xVAs. In addition, the framework allows us to dramatically reduce the memory requirements and improve the performance of a tapeless implementation of automatic differentiation.
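To make the central point concrete, the following is a minimal sketch, not the paper's implementation, of backward automatic differentiation on random variables represented by Monte Carlo samples. The class RV, its methods, and the backward() driver are hypothetical names introduced here for illustration. The key step is in expectation(): the adjoint of the expectation operator is again an expectation (applied to the adjoint), so the sampled approximation of the expectation, with its 1/N weights, is never itself differentiated.

```python
import numpy as np

class RV:
    """A random variable, represented by its Monte Carlo samples,
    recording operations for a backward (adjoint) sweep."""
    def __init__(self, samples, parents=(), backward=None):
        self.samples = np.asarray(samples, dtype=float)
        self.parents = parents
        self._backward = backward          # propagates this node's adjoint
        self.adjoint = np.zeros_like(self.samples)

    def __mul__(self, other):
        out = RV(self.samples * other.samples, (self, other))
        def backward():
            # Pathwise product rule: d(xy)/dx = y, d(xy)/dy = x.
            self.adjoint += out.adjoint * other.samples
            other.adjoint += out.adjoint * self.samples
        out._backward = backward
        return out

    def expectation(self):
        """E[X] as an operator on random variables."""
        out = RV(np.full_like(self.samples, self.samples.mean()), (self,))
        def backward():
            # Adjoint rule A_X += E[A_Y]: take the expectation of the
            # adjoint instead of differentiating the sample average.
            self.adjoint += out.adjoint.mean()
        out._backward = backward
        return out

def backward(result):
    """Seed the result with adjoint 1 and run the reverse sweep
    in reverse topological order."""
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for p in node.parents:
                visit(p)
            order.append(node)
    visit(result)
    result.adjoint = np.ones_like(result.samples)
    for node in reversed(order):
        if node._backward:
            node._backward()

# Usage (illustrative): sensitivity of E[(theta + Z)^2] w.r.t. theta.
rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
theta = 1.0
x = RV(theta + z)
v = (x * x).expectation()
backward(v)
# dE[v]/dtheta = E[A_x * dx/dtheta], with dx/dtheta = 1 pathwise;
# analytically 2 * (theta + E[Z]) = 2.
print(np.mean(x.adjoint))
```

The same pattern would extend to a conditional expectation node, where the backward rule becomes A_X += E[A_Y | Z]; this is the assumption under which the paper retains the simplicity of the plain backward sweep.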
