Abstract

This paper studies so-called deep image prior (DIP) techniques in the context of ill-posed inverse problems. DIP networks were recently introduced for applications in image processing, and first experimental results for applying DIP to inverse problems have also been reported. This paper aims at discussing different interpretations of DIP and at obtaining analytic results for specific network designs and linear operators. The main contribution is the idea of viewing these approaches as the optimization of Tikhonov functionals rather than the optimization of networks. Besides theoretical results, we present numerical verifications.

Highlights

  • Deep image priors (DIP) were recently introduced in deep learning for some tasks in image processing [19]

  • We examine the analytic deep image prior, utilizing the proximal gradient descent approach to compute x(B)

  • We investigate the concept of deep inverse priors/regularization by architecture
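The proximal gradient descent mentioned in the highlights can be illustrated on a generic sparsity-regularized least-squares problem. The following is a minimal numpy sketch (ISTA), not the paper's specific procedure for computing x(B); the function names and the choice of an ℓ¹ penalty are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft shrinkage)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(B, y, lam, n_iter=1000):
    """Proximal gradient (ISTA) for min_x 0.5*||B x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(B, 2) ** 2          # Lipschitz constant of the smooth part
    step = 1.0 / L
    x = np.zeros(B.shape[1])
    for _ in range(n_iter):
        grad = B.T @ (B @ x - y)           # gradient of the data-fit term
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x

# Small demo: recover a sparse vector from noisy linear measurements
rng = np.random.default_rng(0)
B = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 30, 70]] = [1.0, -2.0, 1.5]
y = B @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(B, y, lam=0.1)
```

Unrolling a fixed number of such iterations, with the soft-threshold step replaced by a learned layer, yields the unrolled proximal gradient architectures discussed below.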


Summary

Introduction

Deep image priors (DIP) were recently introduced in deep learning for some tasks in image processing [19]. Common choices for A are the identity operator (denoising) or a projection operator onto a subset of the image domain (inpainting). For these applications, it has been observed that minimizing the functional iteratively by gradient descent methods, in combination with a suitable stopping criterion, leads to amazing results [19]. We aim at analyzing a specific network architecture φΘ, at interpreting the resulting DIP approach as a regularization technique in the functional-analytic setting, and at proving convergence properties for the minimizers of (1.1). We present different mathematical interpretations of DIP approaches, and we analyze two network designs in the context of inverse problems in more detail. We exemplify our theoretical findings with numerical examples for the standard linear integration operator.
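The iterative minimization described above can be sketched in a few lines: an untrained network φΘ with a fixed random input z is fitted by gradient descent so that A φΘ(z) ≈ y, here for an integration operator as in our numerical examples. This is a minimal sketch under stated assumptions; the one-hidden-layer design, the name dip_reconstruct, and all hyperparameters are illustrative, not the architecture analyzed in the paper.

```python
import numpy as np

def dip_reconstruct(A, y, n_hidden=64, n_iter=2000, lr=0.01, seed=0):
    """Deep-prior sketch: an untrained one-hidden-layer network
    f(Theta) = W2 @ relu(W1 @ z), with z a fixed random input, is fitted
    by plain gradient descent on the data-fit term 0.5*||A f - y||^2."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    z = rng.standard_normal(n)                      # fixed random network input
    W1 = rng.standard_normal((n_hidden, n)) / np.sqrt(n)
    W2 = rng.standard_normal((n, n_hidden)) / np.sqrt(n_hidden)
    for _ in range(n_iter):
        pre = W1 @ z
        h = np.maximum(pre, 0.0)                    # ReLU hidden layer
        f = W2 @ h                                  # network output = reconstruction
        r = A @ f - y                               # data-fit residual
        g_f = A.T @ r                               # gradient w.r.t. the output
        gW2 = np.outer(g_f, h)                      # backprop through W2
        gW1 = np.outer((W2.T @ g_f) * (pre > 0), z) # backprop through ReLU and W1
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W2 @ np.maximum(W1 @ z, 0.0)

# Demo with a discrete integration operator A (scaled cumulative sum)
n = 50
A = np.tril(np.ones((n, n))) / n
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * t)
rng = np.random.default_rng(1)
y = A @ x_true + 1e-3 * rng.standard_normal(n)
x_dip = dip_reconstruct(A, y)
```

In practice the iteration count acts as the stopping criterion: running too long lets the over-parameterized network fit the noise, which is exactly the early-stopping effect observed in [19].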

The Deep Prior Approach
Deep Prior and Unrolled Proximal Gradient Architectures
Deep Prior Architectures and Interpretations
A Trivial Architecture
Two Perspectives Based on Regression
The Bayesian Point of View
Deep Priors and Tikhonov Functionals
Unrolled Proximal Gradient Networks as Deep Priors for Inverse Problems
Example
Constrained System of Singular Functions
Method
Numerical Experiments
Summary and Conclusion