Abstract

We establish error estimates for the Longstaff–Schwartz algorithm, employing a single set of independent Monte Carlo sample paths that is reused across all exercise time steps. In the setting of financial derivative payoff functions bounded in the uniform norm, we obtain new bounds on the stochastic part of the error of this algorithm for an approximation architecture that may be an arbitrary set of L2 functions of finite Vapnik–Chervonenkis (VC) dimension, whenever the algorithm's least-squares regression optimization step is solved either exactly or approximately. Moreover, we show how to extend these estimates to the case of payoff functions bounded only in Lp, p a real number greater than [Formula: see text]. We also establish new overall error bounds for the Longstaff–Schwartz algorithm, including estimates on the approximation error for unconstrained linear, finite-dimensional polynomial approximation. Our results extend those in the literature by not imposing any uniform boundedness condition on the approximation architectures, allowing each of them to be any set of L2 functions of finite VC dimension, and by establishing error estimates also for the case of ɛ-additive approximate least-squares optimization, ɛ greater than or equal to 0.
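
To make the setting concrete, the following is a minimal sketch of the standard Longstaff–Schwartz least-squares Monte Carlo recursion that these estimates concern: a single batch of simulated paths is reused at every exercise date, and continuation values are fitted by least-squares regression, here on a polynomial basis. The model (geometric Brownian motion), parameter values, and the choice of regression basis are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the standard Longstaff-Schwartz least-squares Monte Carlo
# algorithm for a Bermudan put under geometric Brownian motion. All parameter
# values and the polynomial basis are illustrative assumptions.
import numpy as np

def longstaff_schwartz_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                           n_steps=50, n_paths=100_000, degree=3, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)

    # Simulate ONE set of sample paths, reused at every exercise time step.
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)),
                               np.cumsum(log_increments, axis=1)]))

    # Cashflows at maturity: exercise value of the put.
    cashflow = np.maximum(K - S[:, -1], 0.0)

    # Backward induction over the exercise dates.
    for t in range(n_steps - 1, 0, -1):
        cashflow *= disc  # discount future cashflows one step back
        exercise = np.maximum(K - S[:, t], 0.0)
        itm = exercise > 0.0  # regress only on in-the-money paths
        if not itm.any():
            continue
        # Least-squares regression of discounted future cashflows on a
        # polynomial basis in the current asset price (the optimization step
        # whose exact or approximate solution the error estimates cover).
        coeffs = np.polyfit(S[itm, t], cashflow[itm], degree)
        continuation = np.polyval(coeffs, S[itm, t])
        # Exercise where the immediate payoff beats the estimated continuation value.
        ex_now = exercise[itm] > continuation
        idx = np.where(itm)[0][ex_now]
        cashflow[idx] = exercise[itm][ex_now]

    return disc * cashflow.mean()

if __name__ == "__main__":
    print(f"Bermudan put estimate: {longstaff_schwartz_put():.4f}")
```

The stochastic error analyzed in the abstract arises from using the same finite set of simulated paths both to fit the regression at each date and to form the exercise decisions, while the approximation error arises from restricting the regression to a finite-dimensional (here polynomial) architecture.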
