Abstract

In this paper, we design a novel algorithm based on least-squares Monte-Carlo (LSMC) to approximate the solution of discrete-time backward stochastic differential equations (BSDEs). Our algorithm allows massive parallelization of the computations on many-core processors such as graphics processing units (GPUs). Our approach relies on a novel stratification method, which proves crucial for large-scale parallelization: it bounds the memory required to store the simulations. Indeed, the method exhibits a lower memory overhead than previous works.
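To fix ideas, the following is a minimal sketch of plain LSMC backward induction for a discrete-time BSDE, without the paper's stratification or GPU parallelization. Everything here is an illustrative assumption rather than the authors' setup: the forward process is a Brownian motion, the driver `f`, terminal condition `g`, and polynomial regression basis are chosen only for demonstration. Conditional expectations at each time step are approximated by least-squares regression over simulated paths.

```python
import numpy as np

# Hedged sketch: LSMC backward induction for a simple discrete-time BSDE
#   Y_i ~ E[Y_{i+1} | X_i] + f(t_i, X_i, Y_{i+1}, Z_i) * dt,
#   Z_i ~ E[Y_{i+1} * dW_i | X_i] / dt,
# with conditional expectations fitted by least squares on a polynomial basis.
# All model choices below are illustrative assumptions.

rng = np.random.default_rng(0)
M, N, T = 20000, 20, 1.0            # simulated paths, time steps, horizon
dt = T / N

# Forward process: standard Brownian motion (assumption).
dW = rng.standard_normal((N, M)) * np.sqrt(dt)
X = np.vstack([np.zeros(M), np.cumsum(dW, axis=0)])

g = lambda x: np.maximum(x, 0.0)    # terminal condition (assumption)
f = lambda t, x, y, z: -0.05 * y    # linear driver (assumption)

def basis(x):
    # Polynomial regression basis for the conditional expectations.
    return np.vstack([np.ones_like(x), x, x**2, x**3]).T

Y = g(X[N])                         # terminal values Y_N = g(X_N)
for i in range(N - 1, -1, -1):
    B = basis(X[i])
    # Z_i: regress Y_{i+1} * dW_i / dt on the basis at time i.
    cz, *_ = np.linalg.lstsq(B, Y * dW[i] / dt, rcond=None)
    Z = B @ cz
    # Y_i: regress Y_{i+1} on the basis, then apply the explicit scheme.
    cy, *_ = np.linalg.lstsq(B, Y, rcond=None)
    Y = B @ cy + f(i * dt, X[i], Y, Z) * dt

print(float(Y.mean()))              # Monte-Carlo estimate of Y_0
```

Note that all paths and regression coefficients must be held in memory at once; the stratification proposed in the paper is aimed precisely at reducing this storage burden when the computation is spread across many GPU cores.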
