Abstract

We present three adaptive techniques to improve the computational performance of deep neural network (DNN) methods for high-dimensional partial differential equations (PDEs): adaptive choice of the loss function, adaptive activation functions, and adaptive sampling, all applied during the training of the DNN. Several numerical experiments show that our adaptive techniques significantly improve computational accuracy and accelerate convergence without increasing the number of layers or the number of neurons in the DNN. In particular, even for some 50-dimensional problems, the relative errors of our algorithm reach the order of O(10⁻⁴).
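To make the second idea concrete, the sketch below shows one common way an adaptive activation function can be realized: a tanh activation with a trainable scale parameter that is optimized jointly with the network weights. This is a minimal illustration under assumed conventions, not the authors' implementation; the names AdaptiveTanh and PDENet, the specific form tanh(a·x), and the network sizes are hypothetical choices for demonstration.

```python
# Minimal sketch (PyTorch) of an activation with a trainable scale parameter,
# one possible reading of "adaptive activation function"; not the paper's code.
import torch
import torch.nn as nn


class AdaptiveTanh(nn.Module):
    """tanh activation with a trainable scaling parameter a: sigma_a(x) = tanh(a * x)."""

    def __init__(self, init_scale: float = 1.0):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(init_scale))  # learned with the weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.scale * x)


class PDENet(nn.Module):
    """Fully connected network u_theta(x) approximating the solution of a d-dimensional PDE."""

    def __init__(self, dim: int, width: int = 64, depth: int = 4):
        super().__init__()
        layers, in_features = [], dim
        for _ in range(depth):
            layers += [nn.Linear(in_features, width), AdaptiveTanh()]
            in_features = width
        layers.append(nn.Linear(width, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Usage: the adaptive scale parameters enter model.parameters() and are
# trained together with the weights, without adding layers or neurons.
model = PDENet(dim=50)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(1024, 50, requires_grad=True)  # interior collocation points in [0, 1]^50
u = model(x)                                  # network approximation of the PDE solution
```

Because the scale parameters are ordinary trainable parameters, they fit into any standard training loop for a PDE residual loss; the loss-weighting and sampling adaptivity described in the abstract would modify that loop rather than the network itself.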
