Abstract
Lagrangian interpolation is a classical way to approximate general functions by finite sums of well-chosen, pre-defined, linearly independent interpolating functions; it is much simpler to implement than determining the best fits with respect to some Banach (or even Hilbert) norms. In addition, only partial knowledge of the target function is required (here, its values on some set of points). The problem of defining the best sample of points is nevertheless rather complex and is in general open. In this paper we propose a way to derive such sets of points. We do not claim that the points resulting from the construction explained here are optimal in any sense. Nevertheless, the resulting interpolation method is proven to work under certain hypotheses, the process is very general and simple to implement, and, compared with situations where the best behavior is known, it is relatively competitive.
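For concreteness, the following is a minimal sketch (not taken from the paper) of Lagrangian interpolation on a space spanned by a few pre-defined basis functions: the interpolant is determined from point values of the target function alone, by solving the collocation system at the chosen points. The basis functions, points, and names used here are illustrative assumptions.

```python
import numpy as np

def lagrangian_interpolant(basis, x_pts, f):
    """Return the interpolant sum_j c_j * basis[j] that matches f at every
    point in x_pts (square system: len(basis) == len(x_pts))."""
    # Collocation matrix: B[i, j] = basis[j](x_pts[i])
    B = np.array([[phi(x) for phi in basis] for x in x_pts])
    # Only the values of f at the interpolation points are needed
    c = np.linalg.solve(B, np.array([f(x) for x in x_pts]))
    return lambda x: sum(cj * phi(x) for cj, phi in zip(c, basis))

# Illustrative example: three monomials and three points
basis = [lambda x: 1.0, lambda x: x, lambda x: x**2]
x_pts = [0.0, 0.5, 1.0]
interp = lagrangian_interpolant(basis, x_pts, np.exp)
print(interp(0.25), np.exp(0.25))
```

How well such an interpolant approximates the target depends strongly on the choice of the points x_pts, which is precisely the question the paper addresses.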
Highlights
The extension of the reduced basis technique [8, 13, 15, 22, 24, 14] to nonlinear partial differential equations has led us to introduce an “empirical Lagrangian interpolation” method on a finite-dimensional vector space spanned by functions that can be of any type.
The following result extends to the interpolation process the proof in [3] for the best approximation. It makes the previous lemma much more precise, since it allows us to state that, even though we do not know the finite-dimensional spaces that are candidates for achieving the minimal distance in the n-width, the greedy process for the magic points provides spaces that give an upper bound for the right-hand side in (9).
We have presented a general multipurpose interpolation method for selecting interpolation points, which we dub “magic points”.
Summary
The extension of the reduced basis technique [8, 13, 15, 22, 24, 14] to nonlinear partial differential equations has led us to introduce an “empirical Lagrangian interpolation” method on a finite-dimensional vector space spanned by functions that can be of any type (see [1, 7]). Exponentially small n-width is achieved when the parameter dependency is analytic. Another possibility, which we encounter in the reduced basis framework, is given by U = {u(μ, ·), μ ∈ D}, where D is a given (infinite) set of parameters (either in ℝ^p or even in some functional space of continuous functions). We wish to stress that the applicability of the procedure is not limited to the examples we have included in this paper; on the contrary, the procedure may prove advantageous in a variety of applications, for example image or data compression involving domains of irregular profile, fast rendering and visualization in animation, the development of computer simulation surrogates or experimental response surfaces for design and optimization, and the determination of a good numerical integration scheme for smooth functions on irregular domains. For another approach to approximating parameterized fields, in particular an optimization-based approach well suited to noisy data or constrained systems, see [16].
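To illustrate the greedy selection of interpolation points on a family such as U = {u(μ, ·), μ ∈ D}, the sketch below implements an empirical-interpolation-style greedy loop on a discrete candidate grid. This is a sketch under stated assumptions, not the paper's implementation: the snapshot set U, the candidate grid xs, and all names below are illustrative.

```python
import numpy as np

def magic_points(U, xs, n_max, tol=1e-12):
    """Greedily select interpolation points (indices into xs) and basis
    functions from the snapshots U, each a callable evaluated on the grid xs."""
    # Snapshots evaluated on the grid: one row per snapshot
    S = np.array([[u(x) for x in xs] for u in U])
    basis, pts = [], []
    for n in range(n_max):
        if n == 0:
            residuals = S                               # first step: raw snapshots
        else:
            Q = np.array(basis)                         # current basis on the grid
            B = Q[:, pts].T                             # interpolation matrix at chosen points
            coeffs = np.linalg.solve(B, S[:, pts].T)    # interpolate every snapshot
            residuals = S - coeffs.T @ Q                # interpolation errors on the grid
        i, j = np.unravel_index(np.argmax(np.abs(residuals)), residuals.shape)
        if abs(residuals[i, j]) < tol:                  # stop once residuals are negligible
            break
        pts.append(j)                                   # new magic point (grid index)
        basis.append(residuals[i] / residuals[i, j])    # normalized residual as next basis function
    return pts, basis
```

At each step the point and snapshot where the current interpolation error is largest are selected, and the normalized residual becomes the next basis function; this is the greedy principle behind the magic points.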