Abstract

Deep unfolding methods have gained significant popularity for inverse problems because they design deep neural networks (DNNs) from iterative algorithms. Compared with generic DNNs, unfolded networks offer better interpretability and performance. However, their theoretical guarantees of stability and regularization for solving inverse problems remain limited. To address this, we reexamine unfolded DNNs and observe that their algorithm-driven cascading structure closely resembles iterative regularization. Building on this observation, we propose a modified training scheme and a termination criterion for unfolded DNNs, thereby establishing the unfolding method as an iterative regularization technique. Specifically, we jointly learn a convex penalty function, parameterized by an input-convex neural network, that quantifies the distance to the real data manifold. We then train a DNN unfolded from the proximal gradient descent algorithm that incorporates this learned penalty, and we introduce a new termination criterion for the unfolded DNN. Under the assumption that the real data manifold intersects the solution set of the inverse problem at a unique real solution, we prove that the unfolded DNN converges stably to this solution even when the measurements are perturbed. Furthermore, using magnetic resonance imaging reconstruction as an example, we demonstrate that the proposed method outperforms the original unfolding methods and traditional regularization methods in reconstruction quality, stability, and convergence speed.

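To make the construction concrete, the sketch below illustrates the two ingredients named in the abstract: a convex penalty parameterized by an input-convex neural network (ICNN) and a network unfolded from proximal gradient descent that uses this penalty. It is a minimal illustration, not the paper's architecture: the class and function names (`ICNN`, `unfolded_pgd`), the use of PyTorch, the inner-loop approximation of the proximal operator, and all hyperparameters are assumptions made here for readability; the paper's training procedure and termination criterion are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ICNN(nn.Module):
    """Minimal input-convex network R(x): convex in x because the weights
    acting on hidden activations are clamped to be non-negative and the
    activation (softplus) is convex and non-decreasing."""

    def __init__(self, dim, hidden=64, layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(layers)])
        self.Wz = nn.ModuleList(
            [nn.Linear(hidden, hidden, bias=False) for _ in range(layers - 1)]
        )
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for wx, wz in zip(self.Wx[1:], self.Wz):
            # non-negative hidden-to-hidden weights preserve convexity in x
            z = F.softplus(wx(x) + F.linear(z, wz.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0)).squeeze(-1)


def unfolded_pgd(y, A, R, n_iters=10, step=1.0, prox_steps=5, prox_lr=0.1):
    """Unfolded proximal gradient descent (reconstruction pass only):
    a gradient step on the data term 0.5 * ||A x - y||^2 followed by an
    approximate proximal step on the learned convex penalty R, computed
    here with a few inner gradient steps (an assumption of this sketch)."""
    x = A.t() @ y  # simple initialization from the adjoint of the measurements
    for _ in range(n_iters):
        # gradient step on the data-fidelity term
        x = x - step * A.t() @ (A @ x - y)
        # approximate prox_{step * R}(x) by inner gradient descent
        z = x.clone().requires_grad_(True)
        for _ in range(prox_steps):
            obj = 0.5 * ((z - x) ** 2).sum() + step * R(z).sum()
            (g,) = torch.autograd.grad(obj, z)
            z = (z - prox_lr * g).detach().requires_grad_(True)
        x = z.detach()
    return x
```

As a usage sketch, one would instantiate `R = ICNN(dim=n)` for signals of length `n`, pre-train it so that it approximates the distance to the data manifold, and then call `unfolded_pgd(y, A, R)` on measurements `y` with forward operator `A`; stopping the unfolded iterations according to a discrepancy-type rule is what turns the scheme into an iterative regularization method in the sense described above.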