Abstract
Deep compressive sensing (CS) has become a prevalent technique for image acquisition and reconstruction. However, existing deep learning (DL)-based CS methods often suffer from block artifacts and information loss during iterative reconstruction, particularly at low sampling rates, resulting in a loss of reconstructed detail. To address these issues, we propose NesTD-Net, an unfolding-based architecture inspired by the NESTA algorithm and designed for image CS. NesTD-Net integrates DL modules into the NESTA iterations, forming a deep network that iteratively solves the ℓ1-norm CS problem and ensures high-quality image CS. Using a learned sampling matrix for measurements and an initialization module for the initial estimate, NesTD-Net introduces Iteration Sub-Modules derived from the NESTA algorithm (i.e., Yk, Zk, and Xk) during the reconstruction stages to iteratively solve the ℓ1-norm CS reconstruction problem. Additionally, NesTD-Net incorporates a Dual-Path Deblocking Structure (DPDS) to facilitate feature information flow and mitigate block artifacts, enhancing the reconstruction of image detail. Furthermore, DPDS is highly versatile and integrates seamlessly with other unfolding-based methods, offering the potential to improve their reconstruction performance. Experimental results demonstrate that the proposed NesTD-Net outperforms other state-of-the-art methods in terms of image quality metrics such as SSIM and PSNR, as well as in visual perception, on several public benchmark datasets.
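To make the unfolding structure described above more concrete, the following is a minimal, illustrative PyTorch-style sketch of a NESTA-style unfolded reconstruction loop. It is not the authors' implementation: the module names (IterationSubModule, UnfoldedNet), the residual CNN standing in for the learned refinement/DPDS, the scalar learnable step sizes, and the abstract sampling operators A/At are all hypothetical stand-ins used only to convey the Yk/Zk/Xk iteration pattern.

```python
# Illustrative sketch of an unfolded NESTA-style CS reconstruction network.
# All names and parameterizations here are assumptions, not the paper's code.
import torch
import torch.nn as nn


class IterationSubModule(nn.Module):
    """One unfolded iteration: Yk (gradient step), Zk (accumulated-gradient step),
    and Xk (convex combination), followed by a small residual refinement CNN."""

    def __init__(self, channels=32):
        super().__init__()
        # Learnable step sizes and combination weight (hypothetical scalar parameterization).
        self.alpha = nn.Parameter(torch.tensor(0.1))
        self.beta = nn.Parameter(torch.tensor(0.1))
        self.tau = nn.Parameter(torch.tensor(0.5))
        # Small CNN standing in for the learned refinement / deblocking module.
        self.refine = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x, grad_accum, A, At, y):
        # Gradient of the data-fidelity term 0.5 * ||A(x) - y||^2.
        grad = At(A(x) - y)
        y_k = x - self.alpha * grad                      # Yk: plain gradient step
        grad_accum = grad_accum + grad
        z_k = x - self.beta * grad_accum                 # Zk: step on accumulated gradients
        x_next = self.tau * z_k + (1 - self.tau) * y_k   # Xk: convex combination
        return x_next + self.refine(x_next), grad_accum  # residual refinement


class UnfoldedNet(nn.Module):
    """Stacks a fixed number of iteration sub-modules into a trainable network."""

    def __init__(self, num_iters=9):
        super().__init__()
        self.stages = nn.ModuleList(IterationSubModule() for _ in range(num_iters))

    def forward(self, y, A, At):
        # A and At are caller-supplied sampling and adjoint operators (assumed callables).
        x = At(y)                            # simple initialization from the measurements
        grad_accum = torch.zeros_like(x)
        for stage in self.stages:
            x, grad_accum = stage(x, grad_accum, A, At, y)
        return x
```

In an unfolding network of this kind, each stage has its own learnable parameters, so the effective step sizes and refinement filters are learned per iteration from data rather than fixed analytically as in the original NESTA algorithm.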