Abstract
Deep learning algorithms have been widely used to solve linear Kolmogorov partial differential equations (PDEs) in high dimensions, where the loss function is defined as a mathematical expectation. We propose to use the randomized quasi-Monte Carlo (RQMC) method instead of the Monte Carlo (MC) method for computing the loss function. In theory, we decompose the error from empirical risk minimization (ERM) into the generalization error and the approximation error. Notably, the approximation error is independent of the sampling methods. We prove that the convergence order of the mean generalization error for the RQMC method is O(n^{−1+ϵ}) for arbitrarily small ϵ>0, while for the MC method it is O(n^{−1/2+ϵ}) for arbitrarily small ϵ>0. Consequently, we find that the overall error for the RQMC method is asymptotically smaller than that for the MC method as n increases. Our numerical experiments show that the algorithm based on the RQMC method consistently achieves smaller relative L2 error than that based on the MC method.
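The core idea — replacing MC sampling with RQMC when estimating an expectation — can be illustrated with a minimal sketch. This is not the paper's implementation: the integrand, dimension, and sample size below are illustrative assumptions, and scrambled Sobol' points (via `scipy.stats.qmc`) stand in for whichever RQMC construction the paper uses.

```python
# Minimal sketch (illustrative, not the paper's method): estimating an
# expectation E[f(U)] over U ~ Uniform([0,1]^d) with plain Monte Carlo
# versus randomized quasi-Monte Carlo (scrambled Sobol' points).
import numpy as np
from scipy.stats import qmc

d = 4          # dimension (illustrative choice)
n = 2 ** 10    # number of samples; powers of 2 suit Sobol' sequences

# Toy integrand with known expectation 1 (each factor has mean 1)
def f(x):
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

rng = np.random.default_rng(0)

# Plain Monte Carlo: i.i.d. uniform points, error ~ O(n^{-1/2})
mc_points = rng.random((n, d))
mc_est = f(mc_points).mean()

# RQMC: scrambled Sobol' points, error ~ O(n^{-1+eps}) for smooth integrands
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
rqmc_points = sobol.random(n)
rqmc_est = f(rqmc_points).mean()

print(mc_est, rqmc_est)  # both approximate the true expectation 1
```

In a deep-learning setting, the same substitution would apply to the sample points used to form the empirical loss; the scrambling step is what makes the estimator unbiased and allows error estimation across independent randomizations.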