Abstract

The offline phase of the reduced basis method can be very time-consuming when the training set of parameter samples is large, which is typically the case when the system has more than two independent parameters. On the other hand, if the training set contains too few parameter samples, the greedy algorithm may produce a reduced-order model with large errors at parameter samples outside the training set. We introduce a method based on a surrogate error model that samples the parameter domain efficiently, adaptively updating the training set starting from a coarse set with a small number of parameter samples. A sharp a posteriori error estimator is evaluated on the coarse training set, and radial basis functions are used to interpolate the error estimator over a separate fine training set. At every iteration, points from the fine training set are added to the coarse training set according to a user-defined criterion; in parallel, parameter samples whose estimated error satisfies the prescribed tolerance are adaptively removed from the coarse training set. The approach is shown to avoid high computational costs by keeping the training set small while providing a reduced-order model with guaranteed accuracy over the fine training set. Furthermore, we give numerical evidence that the reduced-order model meets the prescribed tolerance on a test set sampled independently from the parameter domain.
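
To make the adaptive-sampling loop described above concrete, the following is a minimal Python sketch of one plausible reading of it; it is not the authors' implementation. The `error_estimator` function, the parameter domain, the tolerance, the set sizes, and the trimming guard are all placeholder assumptions: in the actual method the estimator would be the sharp a posteriori error estimator of the current reduced-order model, and a greedy basis-update step would run in each iteration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Placeholder for the a posteriori error estimator (assumption for this
# sketch). In the method itself, this expensive quantity is evaluated
# only on the small coarse training set.
def error_estimator(mus):
    return np.abs(np.sin(3.0 * mus[:, 0]) * np.cos(2.0 * mus[:, 1]))

rng = np.random.default_rng(0)
coarse = rng.uniform(0.0, 1.0, size=(10, 2))    # coarse training set
fine = rng.uniform(0.0, 1.0, size=(2000, 2))    # fine candidate set
tol = 1e-2                                      # prescribed tolerance

for it in range(50):
    # (A greedy step updating the reduced-order model on `coarse`
    # would go here.)
    est = error_estimator(coarse)

    # Surrogate error model: interpolate the estimator over the fine
    # training set with radial basis functions.
    surrogate = RBFInterpolator(coarse, est)
    est_fine = surrogate(fine)

    worst = int(np.argmax(est_fine))
    if est_fine[worst] < tol:
        break   # estimated error meets the tolerance on the fine set

    # Adaptively remove coarse samples that already satisfy the
    # tolerance (guarding so the interpolant stays well-posed), then
    # add the worst fine-set sample to the coarse set.
    keep = est >= tol
    if keep.sum() >= 5:
        coarse = coarse[keep]
    coarse = np.vstack([coarse, fine[worst]])
    fine = np.delete(fine, worst, axis=0)

print(f"stopped after {it + 1} iterations with {len(coarse)} coarse samples")
```

The design point the sketch illustrates is the cost split: the expensive error estimator is only ever evaluated on the small coarse set, while the cheap RBF surrogate screens the large fine set to decide which samples to add or drop.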
