Abstract
The deep operator network (DeepONet) has proven to be highly successful in operator learning tasks.
Theoretical analysis indicates that the generalization error of DeepONet should decrease as the basis dimension increases,
thus providing a systematic way to reduce the error by varying the network hyperparameters.
However, in practice, we found that, depending on the problem being solved and the activation function used,
the generalization error fluctuates unpredictably, contrary to theoretical expectations.
Upon analyzing the output matrix of the trunk net, we determined that this behavior stems from the learned basis 
functions being highly linearly dependent, which limits the expressivity of the vanilla DeepONet.
To address these limitations, we propose QR-DeepONet, an enhanced version of DeepONet using QR decomposition.
This modification ensures that the learned basis functions are linearly independent and mutually orthogonal.
Numerical results demonstrate that the generalization error of QR-DeepONet follows the theoretical prediction, decreasing monotonically as the basis dimension increases, and that QR-DeepONet outperforms the vanilla DeepONet.
Consequently, the proposed method successfully bridges the gap between theory and practice.
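The core idea described above can be illustrated with a minimal sketch: orthogonalizing the trunk-net output matrix via a reduced QR decomposition. This is a hypothetical illustration using NumPy, not the authors' implementation; the matrix shapes and the way near-linear dependence is simulated are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical setup (not the paper's code): T is a trunk-net output matrix of
# shape (n_points, p), whose p columns are learned basis functions evaluated
# at n_points locations.
rng = np.random.default_rng(0)
n_points, p = 200, 8

# Simulate nearly linearly dependent basis functions (the pathology the
# abstract describes): all columns are small perturbations of a rank-2 space.
base = rng.standard_normal((n_points, 2))
mix = rng.standard_normal((2, p))
T = base @ mix + 1e-3 * rng.standard_normal((n_points, p))

# Reduced QR decomposition: T = Q R, where Q has orthonormal columns spanning
# the same space as the columns of T.
Q, R = np.linalg.qr(T)

# The orthogonalized basis satisfies Q^T Q = I up to floating-point error.
print(np.allclose(Q.T @ Q, np.eye(p)))
```

In a QR-enhanced DeepONet, the orthonormal columns of `Q` would replace the raw trunk outputs as the basis, with `R` absorbed elsewhere in the network so the overall operator approximation is unchanged in span but better conditioned.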