Abstract

When approximating functions defined on a domain $\Omega \subset \mathbb{R}^d$, standard tensor product splines exhibit suboptimal behavior, in particular if $\Omega$ is non-convex. As an alternative, we suggest a natural diversification strategy for the B-spline basis $\{B_i\}_i$: it employs a separate copy $B_{i,\gamma}$ of $B_i$ for every connected component $\gamma$ of its support $\operatorname{supp} B_i \cap \Omega$. In the bivariate case, which is important for applications, this process enhances the spline space to a crucial extent. Concretely, we prove that the error in uniform tensor product spline approximation of a function $f \colon \Omega \subset \mathbb{R}^2 \to \mathbb{R}$ can be bounded in terms of the pure partial derivatives of $f$, with a constant that depends neither on the shape of $\Omega$ nor on the knot grid. An example shows that an analogous result cannot hold in higher dimensions, even if the domain is convex and has a smooth boundary.
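To make the diversification idea concrete, the following is a minimal sketch, not taken from the paper: it assumes the domain $\Omega$ is given as a boolean cell mask on the spline grid and each basis function's support as a pair of index slices. The function name `diversify_basis` and the whole setup are hypothetical illustrations; the sketch labels the connected components of $\operatorname{supp} B_i \cap \Omega$ and creates one copy of $B_i$ per component.

```python
# Illustrative sketch (not the paper's implementation): diversifying a
# tensor-product B-spline basis over a domain given as a boolean cell mask.
import numpy as np
from scipy.ndimage import label

def diversify_basis(domain_mask, supports):
    """For each basis function B_i with support window supports[i]
    (a pair of slices into the mask), create one copy B_{i,gamma} per
    connected component gamma of supp B_i intersected with the domain.

    Returns a list of (i, component_mask) pairs, where component_mask
    is a boolean array over the full grid selecting the cells of gamma.
    """
    copies = []
    for i, window in enumerate(supports):
        # Restrict the domain mask to the support of B_i.
        local = np.zeros_like(domain_mask)
        local[window] = domain_mask[window]
        # Label the connected components of supp B_i ∩ Ω (4-connectivity).
        labeled, n_components = label(local)
        for gamma in range(1, n_components + 1):
            copies.append((i, labeled == gamma))
    return copies

# Toy example: a connected but non-convex domain with a thin slit.
mask = np.ones((8, 8), dtype=bool)
mask[1:, 4] = False  # slit; Ω stays connected through the top row
# One hypothetical basis function whose support straddles the slit:
supports = [(slice(2, 8), slice(1, 8))]
copies = diversify_basis(mask, supports)
print(len(copies))  # 2: supp B_0 ∩ Ω has two connected components
```

In the toy example, a single basis function whose support meets $\Omega$ in two pieces is replaced by two independent copies, one per piece, which is exactly the enlargement of the spline space described above.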
