Abstract
In video snapshot compressive imaging (SCI) systems, video reconstruction methods recover spatially and temporally correlated video frames from a single compressed measurement. While deep unfolding methods have demonstrated promising performance, they face two challenges: (1) they cannot estimate the degradation patterns and the degree of ill-posedness of video SCI, which hampers guiding and supervising the iterative learning process; and (2) their prevailing reliance on 3D-CNNs limits their capacity to capture long-range dependencies. To address these concerns, this paper introduces the Degradation-Aware Deep Unfolding Network (DADUN). DADUN leverages priors estimated from the compressed frames and the physical mask to guide and control each iteration. We also develop a novel Bidirectional Propagation Convolutional Recurrent Neural Network (BiP-CRNN) that simultaneously captures intra-frame content and inter-frame dependencies. By plugging BiP-CRNN into DADUN, we establish a novel end-to-end (E2E), data-dependent deep unfolding method, DADUN with transformer prior (TP), for video sequence reconstruction. Experimental results on various video sequences demonstrate the effectiveness of the proposed approach, which is also robust to random masks and generalizes well.
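For concreteness, the standard video SCI forward model underlying this setting compresses T mask-modulated frames into one 2D snapshot. The sketch below is a minimal NumPy illustration of this common formulation, not the paper's implementation; the array names and sizes are illustrative assumptions:

```python
import numpy as np

# Video SCI forward model: y = sum_t M_t * x_t, where x_t are T video
# frames, M_t are per-frame binary modulation masks, and y is the single
# compressed 2D measurement the reconstruction network must invert.
rng = np.random.default_rng(0)
H, W, T = 256, 256, 8                        # frame size and temporal compression ratio (assumed)
frames = rng.random((T, H, W))               # ground-truth video block x_t
masks = rng.integers(0, 2, (T, H, W))        # random binary masks M_t (the "physical mask")
measurement = (masks * frames).sum(axis=0)   # compressed snapshot y, shape (H, W)
```

Recovering the T frames from this single measurement is the ill-posed inverse problem that DADUN's degradation-aware iterations are designed to solve.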