Abstract

Recovering a function or high-dimensional parameter vector from indirect measurements is a central task in various scientific areas. Several methods for solving such inverse problems are well developed and well understood. Recently, novel algorithms using deep learning and neural networks for inverse problems have appeared. While still in their infancy, these techniques show astonishing performance for applications like low-dose CT and various sparse data problems. However, there are few theoretical results for deep learning in inverse problems. In this paper, we establish a complete convergence analysis for the proposed NETT (network Tikhonov) approach to inverse problems. NETT considers nearly data-consistent solutions that have a small value of a regularizer defined by a trained neural network. We derive well-posedness results and quantitative error estimates, and propose a possible strategy for training the regularizer. Our theoretical results and framework differ from previous work using neural networks for solving inverse problems. A possible data-driven regularizer is proposed. Numerical results are presented for a tomographic sparse data problem and demonstrate good performance of NETT even for unknowns of a different type from the training data. To derive the convergence and convergence rates results, we introduce a new framework based on the absolute Bregman distance, which generalizes the standard Bregman distance from the convex to the non-convex case.
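In symbols, the two central objects read as follows (a sketch following the paper's setup, written here for the simplest Hilbert-space data-fidelity term; the paper allows a general discrepancy functional, and ψ denotes a scalar functional composed with a trained network Φ):

    % NETT functional: data consistency plus a learned penalty.
    \[
      \mathcal{T}_{\alpha,y^\delta}(x)
        = \|F(x) - y^\delta\|^2 + \alpha\,\mathcal{R}(x),
      \qquad
      \mathcal{R}(x) = \psi(\Phi(x)).
    \]
    % Absolute Bregman distance of a Gateaux differentiable R at x_0:
    % the absolute value extends the usual (convex) Bregman distance to
    % non-convex R, and the two notions coincide when R is convex.
    \[
      B_{\mathcal{R}}(x, x_0)
        = \bigl|\mathcal{R}(x) - \mathcal{R}(x_0) - \mathcal{R}'(x_0)(x - x_0)\bigr|.
    \]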

Highlights

  • We study the stable solution of inverse problems of the form: estimate x ∈ D from data yδ = F(x) + ξδ (1.1), where F : D ⊆ X → Y is a possibly non-linear operator between reflexive Banach spaces (X, ‖·‖) and (Y, ‖·‖) with domain D

  • In this paper we develop a new framework for the solution of inverse problems via network Tikhonov (NETT) regularization (1.3); a schematic implementation is sketched after this list

  • We present a complete convergence analysis, deriving well-posedness and weak convergence (Theorem 2.6), norm convergence (Theorem 2.11), and several convergence rate results
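The following minimal sketch makes the NETT minimization (1.3) concrete. It is illustrative only and not the paper's implementation: the forward matrix A, the noisy data y_delta, the parameter values, and the quadratic stand-in for the trained regularizer are hypothetical placeholders.

    import torch

    def nett_reconstruct(A, y_delta, regularizer, alpha=0.01, steps=500, lr=1e-2):
        """Gradient descent on a NETT-type functional
        ||A x - y_delta||^2 + alpha * regularizer(x)."""
        x = torch.zeros(A.shape[1], requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = torch.sum((A @ x - y_delta) ** 2) + alpha * regularizer(x)
            loss.backward()
            opt.step()
        return x.detach()

    # Toy usage: a quadratic penalty stands in for the trained network R.
    A = torch.randn(30, 50)
    y_delta = torch.randn(30)
    x_hat = nett_reconstruct(A, y_delta, regularizer=lambda x: torch.sum(x ** 2))

In NETT the regularizer is a trained network designed to take small values on artifact-free reconstructions and large values on corrupted ones; the lambda above only marks where such a network would be plugged in.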

Summary

Introduction

We study the stable solution of inverse problems of the form (1.1): estimate x ∈ D from data yδ = F(x) + ξδ. Here F : D ⊆ X → Y is a possibly non-linear operator between reflexive Banach spaces (X, ‖·‖) and (Y, ‖·‖) with domain D. We allow a possibly infinite-dimensional function space setting, but the approach and results clearly apply in a finite-dimensional setting as well. The element ξδ ∈ Y models the unknown data error (noise), which is assumed to satisfy the estimate ‖ξδ‖ ≤ δ for some noise level δ ≥ 0. Because such problems are typically ill-posed, their stable solution requires regularization methods, which approximate (1.1) by neighboring well-posed problems that enforce stability and uniqueness.
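As a minimal numerical illustration of this setting (all names and values here are hypothetical, with an ill-conditioned linear map standing in for F), the sketch below generates data yδ = F(x) + ξδ with ‖ξδ‖ ≤ δ and shows why naive inversion fails while a classical Tikhonov-regularized solve remains stable:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50

    # Ill-conditioned linear forward operator (singular values 1 ... 1e-8).
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    F = U @ np.diag(np.logspace(0, -8, n)) @ V.T

    # Noisy data y_delta = F(x) + xi with ||xi|| <= delta.
    x_true = rng.standard_normal(n)
    delta = 1e-4
    xi = rng.standard_normal(n)
    xi *= delta / np.linalg.norm(xi)
    y_delta = F @ x_true + xi

    # Naive inversion amplifies the noise by up to 1/sigma_min = 1e8 ...
    x_naive = np.linalg.solve(F, y_delta)

    # ... while Tikhonov regularization yields a stable approximation.
    alpha = 1e-6
    x_tik = np.linalg.solve(F.T @ F + alpha * np.eye(n), F.T @ y_delta)

    print("naive error   :", np.linalg.norm(x_naive - x_true))
    print("Tikhonov error:", np.linalg.norm(x_tik - x_true))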
