Abstract

In this work we examine a posteriori error control for post-processed approximations to elliptic boundary value problems. We introduce a class of post-processing operators that “tweak” a wide variety of existing post-processing techniques, enabling efficient and reliable a posteriori bounds to be proven. This ultimately results in optimal error control for all manner of reconstruction operators, including those that superconverge. We showcase our results by applying them to two very popular classes of reconstruction operators, the Smoothness-Increasing Accuracy-Conserving filter and superconvergent patch recovery. Extensive numerical tests are conducted that confirm our analytic findings.

Highlights

  • Post-processing techniques are often used in numerical simulations for a variety of reasons, from visualisation purposes [4] to designing superconvergent approximations [5], through to becoming fundamental building blocks in constructing numerical schemes [6,12,13].

  • We study the numerical behaviour of the error indicators proposed for the Smoothness-Increasing Accuracy-Conserving (SIAC) and superconvergent patch recovery (SPR) post-processing operators.

  • u = 0 on ∂Ω, where the forcing function f is chosen so that the exact solution is u(x) = sin²(6πx) cos(9πx) on the interval (0, 1). We show both the L2 and H1 errors for the Galerkin approximation uh, the SIAC post-processed approximation u∗, and the orthogonal post-processor u∗∗.

Introduction

Post-processing techniques are often used in numerical simulations for a variety of reasons, from visualisation purposes [4] to designing superconvergent approximations [5], through to becoming fundamental building blocks in constructing numerical schemes [6,12,13]. Another application of these operators is as a very useful component in the a posteriori analysis of approximations of partial differential equations (PDEs) [2,33]. In many cases, the post-processed approximation u∗ is expected to be a better approximation than uh. This raises a natural question: if an adaptive algorithm computes (on any given mesh) both uh and u∗, and u∗ is a better approximation of u than uh, why is uh, and not u∗, considered the “primary” approximation of u? We aim to provide reliable and efficient error control for u − u∗.
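To make the idea of a post-processing (recovery) operator concrete, the following is a minimal, illustrative 1D sketch in the spirit of superconvergent patch recovery: the element-wise slopes of a piecewise-linear nodal interpolant are fitted by least squares over two-element patches, and the fit is evaluated at the nodes to produce a recovered gradient that converges at a higher rate than the raw slopes. All function names here are our own, the test function is our reading of the manufactured solution above, and this is not the paper's operator (in particular, the SIAC filter, a B-spline convolution, is not shown).

```python
import numpy as np

def element_slopes(x, u):
    """Element-wise slopes of the piecewise-linear interpolant with nodal values u."""
    return np.diff(u) / np.diff(x)

def spr_gradient(x, u):
    """Illustrative SPR-style recovery: least-squares linear fit of the element
    slopes (sampled at element midpoints) over each two-element patch,
    evaluated at the interior node of the patch."""
    s = element_slopes(x, u)          # slopes are most accurate at element midpoints
    m = 0.5 * (x[:-1] + x[1:])        # element midpoints
    g = np.empty_like(x)
    for i in range(1, len(x) - 1):
        coef = np.polyfit(m[i - 1:i + 1], s[i - 1:i + 1], 1)  # line through the patch samples
        g[i] = np.polyval(coef, x[i])
    g[0], g[-1] = s[0], s[-1]         # fall back to one-sided slopes at the boundary
    return g

# Manufactured solution (our reading of the test case): u(x) = sin^2(6πx) cos(9πx) on (0, 1).
u  = lambda x: np.sin(6 * np.pi * x) ** 2 * np.cos(9 * np.pi * x)
du = lambda x: (12 * np.pi * np.sin(6 * np.pi * x) * np.cos(6 * np.pi * x) * np.cos(9 * np.pi * x)
                - 9 * np.pi * np.sin(6 * np.pi * x) ** 2 * np.sin(9 * np.pi * x))

errs = []
for n in (256, 512, 1024):
    x = np.linspace(0.0, 1.0, n + 1)
    g = spr_gradient(x, u(x))
    errs.append(np.max(np.abs(g[1:-1] - du(x[1:-1]))))
rates = [np.log2(errs[k] / errs[k + 1]) for k in range(len(errs) - 1)]
print(rates)  # each rate is close to 2: the recovered gradient is O(h^2) accurate,
              # one order better than the O(h) raw slopes evaluated at the nodes
```

On a uniform mesh the two-point fit reduces to averaging the adjacent element slopes, which is exactly why the recovered nodal gradient gains an order of accuracy.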
