Abstract

Neural network functions are assumed to be able to encode the desired solution of an inverse problem very efficiently. In this paper, we consider the problem of solving linear inverse problems with neural network coders. First, we establish correspondences between this formulation and existing concepts in regularization theory, in particular state-space regularization, operator decomposition, and iterative regularization methods. A Gauss–Newton method is suitable for solving encoded linear inverse problems, which we support with a local convergence result. The convergence analysis is, however, not complete: it relies on a conjecture about the linear independence of activation functions and their derivatives. Numerical experiments are presented to support the theoretical findings.
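The setting above can be illustrated with a minimal sketch: given a linear forward operator A and a small neural-network coder phi mapping parameters to the unknown signal, a damped Gauss–Newton iteration minimizes ||A phi(theta) - y||². All names here (A, B, phi, the one-layer tanh coder, the damping constant) are illustrative assumptions, not the construction of the paper; the initialization near the true parameters reflects the local nature of the convergence result.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): solve
#   min_theta 0.5 * || A @ phi(theta) - y ||^2
# where phi is a one-layer tanh "coder" with fixed weights B.

rng = np.random.default_rng(0)
n, m, p = 20, 15, 10              # signal dim, data dim, parameter dim
A = rng.standard_normal((m, n))   # linear forward operator
B = rng.standard_normal((n, p))   # fixed coder weights (assumed)

def phi(theta):
    """Coder: parameters -> signal."""
    return np.tanh(B @ theta)

def dphi(theta):
    """Jacobian of the coder, shape (n, p)."""
    return (1.0 - np.tanh(B @ theta) ** 2)[:, None] * B

theta_true = rng.standard_normal(p)
y = A @ phi(theta_true)           # noise-free synthetic data

# Start close to the true parameters (local convergence regime).
theta = theta_true + 0.1 * rng.standard_normal(p)
for _ in range(50):
    r = A @ phi(theta) - y        # residual
    J = A @ dphi(theta)           # Jacobian of theta -> A phi(theta)
    # Small damping keeps the normal equations well posed.
    step = np.linalg.solve(J.T @ J + 1e-8 * np.eye(p), J.T @ r)
    theta -= step

print(np.linalg.norm(A @ phi(theta) - y))  # residual after the iteration
```

Because the synthetic data are noise-free and the iteration starts near theta_true, the residual is driven to (numerically) zero; with noisy data, the iteration would have to be stopped early in the spirit of iterative regularization.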
