Abstract

We discuss the possibility of learning a data-driven explicit model correction for inverse problems and whether such a model correction can be used within a variational framework to obtain regularised reconstructions.

Highlights

  • In inverse problems it is usually considered imperative to have an accurate forward model of the underlying physics

  • In this paper we investigate the possibility of correcting such approximation errors explicitly with data-driven methods, in particular, using a convolutional neural network (CNN)

  • We introduce a forward-adjoint correction that combines an explicit forward model correction with an explicit correction of the adjoint (a minimal sketch follows this list)

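In the paper's notation, the accurate forward operator A is expensive and the approximate operator à is fast but biased. The learned correction F_θ is trained so that F_θ(Ã x) ≈ A x on training data, and the forward-adjoint variant additionally learns a network G_φ with G_φ(Ã* r) ≈ A* r. Below is a minimal, hypothetical PyTorch sketch of one training step for both corrections; the architecture small_cnn, the helper train_step, the operator handles A, A_tilde, A_adj, A_tilde_adj, and the random-probe training signal for the adjoint are assumptions for illustration, not the paper's exact construction.

import torch
import torch.nn as nn

# Hypothetical CNNs for the two corrections; the paper does not pin the
# architecture down to this form, so treat these as placeholders.
def small_cnn(channels: int = 1) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, channels, kernel_size=3, padding=1),
    )

F_theta = small_cnn()  # forward correction:  F_theta(A_tilde(x)) ≈ A(x)
G_phi = small_cnn()    # adjoint correction:  G_phi(A_tilde_adj(r)) ≈ A_adj(r)

optimizer = torch.optim.Adam(
    list(F_theta.parameters()) + list(G_phi.parameters()), lr=1e-4
)

def train_step(x, A, A_tilde, A_adj, A_tilde_adj):
    # x:                     batch of training images, shape (N, C, H, W)
    # A, A_adj:              accurate forward model and its adjoint (e.g. a slow solver)
    # A_tilde, A_tilde_adj:  fast approximate model and its adjoint
    # All four operators are assumed to be callables on batched tensors.
    y_accurate = A(x)
    y_approx = A_tilde(x)
    forward_loss = ((F_theta(y_approx) - y_accurate) ** 2).mean()

    # Train the adjoint correction on random data-space probes so that
    # G_phi(A_tilde^* r) matches A^* r (one plausible choice of training signal).
    r = torch.randn_like(y_accurate)
    adjoint_loss = ((G_phi(A_tilde_adj(r)) - A_adj(r)) ** 2).mean()

    loss = forward_loss + adjoint_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

A call such as train_step(x_batch, A, A_tilde, A_adj, A_tilde_adj) updates both networks jointly. The separate parameterisation matters: a correction learned only in the forward sense gives no control over the behaviour of its adjoint, which is precisely the difficulty the summary below describes.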

Summary

Introduction

In inverse problems it is usually considered imperative to have an accurate forward model of the underlying physics. As we discuss in this study, while it is fairly easy to learn a model correction that fulfils (1.4), such a correction cannot readily be guaranteed to yield high-quality reconstructions when used within the variational problem (1.5). This is a conceptual difficulty caused by a possible discrepancy between the ranges of the adjoints of A and Ã, which can be an inherent part of the approximate model, so that first-order methods for solving (1.5) yield undesirable results. We note that in many cases we cannot hope to find a uniform model correction, but correcting the model error can still be attempted using the notion of a learned correction, quantified by (2.10). This is possible even if the operators A and à exhibit different kernel spaces, as long as the training set {x_i, i = 1, ..., N} is chosen suitably. This makes nonlinear corrections considerably more powerful in correcting model errors than their linear counterparts.
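The range discrepancy materialises once a first-order method is applied. Assuming (1.5) takes the common form x* ∈ argmin_x ½‖F_θ(Ã x) − y‖² + α R(x) (an assumption; the paper's exact formulation may differ), the chain rule routes the data-term gradient through Ã* rather than A*, so the iterates are pulled toward the range of Ã*. A hedged sketch of one gradient step, reusing the hypothetical F_theta and G_phi from above and a simple Tikhonov term R(x) = ½‖x‖² chosen purely for illustration:

import torch

def corrected_gradient_step(x, y, A_tilde, A_tilde_adj, alpha=1e-3, tau=1e-2):
    # One step of gradient descent on
    #     0.5 * ||F_theta(A_tilde(x)) - y||^2 + 0.5 * alpha * ||x||^2,
    # with A_tilde assumed differentiable (e.g. implemented in PyTorch).
    x = x.detach().requires_grad_(True)
    residual = F_theta(A_tilde(x)) - y
    data_term = 0.5 * (residual ** 2).sum()

    # Autograd applies the chain rule, so the data-term gradient is
    #     A_tilde^* [DF_theta(A_tilde x)]^* (F_theta(A_tilde x) - y),
    # i.e. it lies in the range of A_tilde^*, which may differ from range(A^*).
    (grad_data,) = torch.autograd.grad(data_term, x)

    # Forward-adjoint remedy (sketch): bypass the autograd adjoint and apply
    # the learned adjoint correction to the residual instead:
    #     grad_data = G_phi(A_tilde_adj(residual.detach()))

    grad_reg = alpha * x.detach()  # gradient of 0.5 * alpha * ||x||^2
    return (x - tau * (grad_data + grad_reg)).detach()

The commented-out line only indicates where an explicit adjoint correction would enter the iteration; whether such a scheme behaves well is exactly the question the forward-adjoint correction is designed to address.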

A toy case
Conclusion