Abstract

We prove new error estimates for the Longstaff–Schwartz algorithm. We establish an \(O(\log^{\frac{1}{2}}(N)\,N^{-\frac{1}{2}})\) convergence rate for the expected \(L^2\) sample error of this algorithm, where \(N\) is the number of Monte Carlo sample paths, whenever the approximation architecture of the algorithm is an arbitrary set of \(L^2\) functions with finite Vapnik–Chervonenkis dimension. Incorporating bounds on the approximation error as well, we then apply these results to approximation schemes defined by finite-dimensional vector spaces of polynomials, as well as to certain nonlinear sets of neural networks. We obtain corresponding estimates even when the underlying and payoff processes are not necessarily almost surely bounded. These results extend and strengthen those of Egloff (Ann. Appl. Probab. 15, 1396–1432, 2005), Egloff et al. (Ann. Appl. Probab. 17, 1138–1171, 2007), Kohler et al. (Math. Finance 20, 383–410, 2010), Glasserman and Yu (Ann. Appl. Probab. 14, 2090–2119, 2004), Clément et al. (Finance Stoch. 6, 449–471, 2002), as well as others.
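
For context, the following is a minimal sketch of the least-squares Monte Carlo (Longstaff–Schwartz) scheme whose sample error the abstract analyzes, here pricing an American put under an assumed Black–Scholes underlying with a polynomial regression basis as the approximation architecture. The function name, parameters, and default values are illustrative, not taken from the paper.

import numpy as np

def lsmc_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                      n_steps=50, n_paths=10_000, degree=3, seed=0):
    """Least-squares Monte Carlo (Longstaff-Schwartz) price of an American
    put, regressing continuation values on a polynomial basis.
    All model parameters here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)

    # Simulate geometric Brownian motion paths of the underlying.
    z = rng.standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)),
                               np.cumsum(increments, axis=1)]))

    payoff = lambda s: np.maximum(K - s, 0.0)

    # Backward induction: start from the payoff at maturity, then at each
    # earlier date estimate the continuation value by regression and
    # exercise where the immediate payoff exceeds it.
    cash = payoff(S[:, -1])
    for t in range(n_steps - 1, 0, -1):
        cash *= disc
        itm = payoff(S[:, t]) > 0  # regress only on in-the-money paths
        if itm.sum() < degree + 1:
            continue
        coeffs = np.polyfit(S[itm, t], cash[itm], degree)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise = payoff(S[itm, t])
        stop = exercise > continuation
        idx = np.where(itm)[0][stop]
        cash[idx] = exercise[stop]
    return disc * cash.mean()

if __name__ == "__main__":
    print(f"LSMC American put price: {lsmc_american_put():.4f}")

In the paper's terms, the regression step above projects onto a finite-dimensional vector space of polynomials; the abstract's results also cover nonlinear architectures such as sets of neural networks, provided the class has finite Vapnik–Chervonenkis dimension.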
