Abstract
In this work, we propose a two-stage denoising approach consisting of a generation stage and a fusion stage. In the generation stage, we first split the expanding path of the UNet backbone of the standard DIP (deep image prior) network into two branches, converting it into a Y-shaped network (YNet). We then take the initial denoised images produced by the DAGL (dynamic attentive graph learning) and Restormer methods, together with the given noisy image, as target images. Finally, we apply the standard DIP online training routine, aided by a novel automatic iteration termination mechanism, to generate two complementary basic images of substantially improved quality. In the fusion stage, we split the contracting path of the standard UNet network into two branches that receive the two basic images generated in the previous stage, and obtain a fused image as the final denoised result in a fully unsupervised manner. Extensive experiments confirm that our method significantly improves upon the standard DIP and other unsupervised methods, and outperforms recently proposed supervised denoising models. This performance gain stems from the proposed hybrid strategy: supervised denoising methods first handle the common content of images, and the unsupervised method then fine-tunes the image-specific details. In other words, we exploit both the high performance of supervised methods and the flexibility of unsupervised methods.
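To make the generation stage concrete, the sketch below gives a minimal PyTorch rendition of a Y-shaped network with one shared contracting path and two expanding branches, fitted DIP-style toward two pre-denoised targets. The layer widths, depth, loss weights, iteration count, and the `fit` helper are all illustrative assumptions, not the authors' configuration; in particular, the paper's automatic iteration termination mechanism is not modeled here.

```python
# A minimal sketch of the Y-shaped generation network described in the abstract.
# All widths, depths, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in a standard UNet stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class YNet(nn.Module):
    """One shared contracting path, two independent expanding branches."""
    def __init__(self, ch=3, width=64):
        super().__init__()
        self.enc1 = conv_block(ch, width)
        self.enc2 = conv_block(width, width * 2)
        self.pool = nn.MaxPool2d(2)
        # Branch A and branch B: one decoder per target image.
        self.up_a = nn.ConvTranspose2d(width * 2, width, 2, stride=2)
        self.dec_a = conv_block(width * 2, width)
        self.out_a = nn.Conv2d(width, ch, 1)
        self.up_b = nn.ConvTranspose2d(width * 2, width, 2, stride=2)
        self.dec_b = conv_block(width * 2, width)
        self.out_b = nn.Conv2d(width, ch, 1)

    def forward(self, x):
        s1 = self.enc1(x)                       # shared encoder features
        s2 = self.enc2(self.pool(s1))
        a = self.dec_a(torch.cat([self.up_a(s2), s1], dim=1))
        b = self.dec_b(torch.cat([self.up_b(s2), s1], dim=1))
        return self.out_a(a), self.out_b(b)     # two complementary basic images

# DIP-style online fitting: the network input is fixed random noise, and each
# branch is regressed toward its own pre-denoised target plus the noisy image.
# The 0.1 weight on the noisy-image term is a hypothetical choice.
def fit(net, noisy, target_dagl, target_restormer, iters=2000, lr=1e-3):
    z = torch.randn_like(noisy)                 # fixed random input, as in DIP
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(iters):
        opt.zero_grad()
        out_a, out_b = net(z)
        loss = (mse(out_a, target_dagl) + mse(out_b, target_restormer)
                + 0.1 * (mse(out_a, noisy) + mse(out_b, noisy)))
        loss.backward()
        opt.step()
    return net(z)                               # the two basic images
```

The fusion stage would mirror this structure in reverse (two contracting branches merging into one expanding path); in the actual method a fixed iteration budget is replaced by the automatic termination mechanism.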