Abstract

We introduce a data-driven and model-free approach for computing conditional expectations. The method combines classical techniques with machine learning: kernel density estimation based on simulated risk factors is combined with a control variate, and the result is then used in a Gaussian process regression that finally approximates the conditional expectation. In this way we not only increase the stability of the estimator but, thanks to the variance reduction, also require significantly fewer simulations. Since we apply Gaussian process regression, we obtain not only a point estimate but the full predictive distribution. It turns out that the optimal coefficient of the control variate is the minimal-variance delta; in this way we obtain model-free, purely data-driven hedges. Finally, we apply our method to several examples from option pricing, including exotic option payoffs and payoffs on multiple underlyings, for different models including the rough Bergomi model. We discuss the challenges of extending the method to high-dimensional settings and partially address them by using quasi-random number sequences.
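To illustrate the pipeline sketched above, the following is a minimal, hedged example rather than the authors' implementation: it omits the kernel density estimation step, uses a toy Black–Scholes call payoff, and all parameter values as well as the names `S_t`, `control`, and `beta` (a global variance-minimising control-variate coefficient) are illustrative assumptions. The controlled payoffs are regressed on the simulated risk factor with scikit-learn's `GaussianProcessRegressor`, which returns both a mean estimate of the conditional expectation and a predictive standard deviation.

```python
# Sketch only: simulate risk factors, apply a control variate, then use
# Gaussian process regression to approximate E[payoff | S_t].
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy Black-Scholes setup (illustrative parameters, not from the paper)
S0, r, sigma, t, T, K = 100.0, 0.0, 0.2, 0.5, 1.0, 100.0
n = 500
Z1, Z2 = rng.standard_normal(n), rng.standard_normal(n)
S_t = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * Z1)
S_T = S_t * np.exp((r - 0.5 * sigma**2) * (T - t) + sigma * np.sqrt(T - t) * Z2)

payoff = np.maximum(S_T - K, 0.0)

# Control variate: the discounted underlying increment has zero conditional
# mean under the risk-neutral measure; the variance-minimising coefficient
# beta plays the role of a hedge ratio (delta), as discussed in the abstract.
control = S_T - S_t * np.exp(r * (T - t))
beta = np.cov(payoff, control)[0, 1] / np.var(control)
y = payoff - beta * control  # variance-reduced regression targets

# GP regression of the controlled payoff on the risk factor S_t approximates
# the conditional expectation and also yields a predictive distribution.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(S_t.reshape(-1, 1), y)

grid = np.linspace(60.0, 160.0, 5).reshape(-1, 1)
mean, std = gpr.predict(grid, return_std=True)
print(np.c_[grid, mean, std])  # columns: S_t, conditional mean, predictive std
```

The single global `beta` coincides with a minimum-variance hedge ratio only in this simplified one-factor setting; the estimator described in the abstract is richer, so the sketch should be read as an illustration of the variance-reduction and regression idea, not as the full method.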
