Abstract

It is widely acknowledged that neural networks can approximate any continuous (indeed, any measurable) function between finite-dimensional Euclidean spaces to arbitrary accuracy. Recently, neural networks have begun to be applied in infinite-dimensional settings. Universal approximation theorems for operators guarantee that neural networks can learn mappings between infinite-dimensional spaces. In this paper, we propose a neural network-based method (BasisONet) for approximating mappings between function spaces. To reduce the dimension of an infinite-dimensional space, we introduce a novel function autoencoder that compresses function data. Once trained, our model can predict the output function at any resolution from input data sampled at any resolution. Numerical experiments demonstrate that our model is competitive with existing methods on standard benchmarks and handles data on complex geometries with high precision. We further analyze notable characteristics of our model based on the numerical results.
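The core idea of compressing function data into a small set of basis coefficients that can be decoded back into a function can be illustrated with a minimal sketch. The example below is not the paper's architecture: BasisONet learns its basis functions with neural networks, whereas this sketch uses a fixed SVD-derived basis purely to show the encode/decode workflow; all names and the toy dataset are illustrative assumptions.

```python
import numpy as np

# Toy dataset: each row is one function sampled on a uniform grid.
# The functions lie in a 5-dimensional subspace spanned by sines,
# so a 5-coefficient code can represent them exactly.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 128)                 # sampling resolution
amps = rng.normal(size=(200, 5))                  # random amplitudes
sines = np.sin(np.outer(np.arange(1, 6) * np.pi, grid))
data = amps @ sines                               # shape (200, 128)

# "Encoder": project each sampled function onto the top-k empirical
# basis functions obtained from an SVD of the dataset.
k = 5
_, _, vt = np.linalg.svd(data, full_matrices=False)
basis = vt[:k]                                    # k basis functions on grid
codes = data @ basis.T                            # compressed representation

# "Decoder": reconstruct the functions from their codes.
recon = codes @ basis
err = float(np.max(np.abs(recon - data)))
print(f"max reconstruction error: {err:.2e}")
```

In the actual model, both the encoder and the basis functions are parameterized by neural networks, which is what allows evaluation of the decoded function at arbitrary query points rather than only on the training grid.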
