Abstract
For flash-guided non-flash image denoising, the main challenge is to exploit the consistency prior between the two modalities. Most existing methods attempt to model flash/non-flash consistency at the pixel level, which easily leads to blurred edges. In contrast, this paper presents an important finding: the modality gap between flash and non-flash images follows a Laplacian distribution in the gradient domain. Based on this finding, we establish a Laplacian gradient consistency (LGC) model for flash-guided non-flash image denoising, which is shown to converge faster and denoise more accurately than the traditional pixel consistency model. By solving the LGC model, we further design a deep network named LGCNet. Unlike existing image denoising networks, each component of LGCNet strictly matches a step in the solution of the LGC model, giving the network good interpretability. The proposed LGCNet is evaluated on three flash/non-flash image datasets and demonstrates superior denoising performance over many state-of-the-art methods, both quantitatively and qualitatively. Intermediate features are also visualized to verify the effectiveness of the Laplacian gradient consistency prior. The source code is available at https://github.com/JingyiXu404/LGCNet.
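The core statistical claim above — that the flash/non-flash modality gap is Laplacian in the gradient domain — can be probed empirically. The sketch below is illustrative only: it uses synthetic stand-ins for a flash/non-flash pair (the names `flash`, `nonflash`, and the noise model are assumptions, not the paper's data), computes forward-difference gradients, and fits a Laplace distribution to the gradient-domain gap via its maximum-likelihood estimators (location = median, scale = mean absolute deviation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for an aligned flash/non-flash pair: a shared
# clean structure plus small modality-specific perturbations.
base = rng.random((128, 128))
flash = base + 0.1 * rng.random((128, 128))
nonflash = base + 0.1 * rng.random((128, 128))

def gradients(img):
    """Horizontal and vertical forward differences of a 2-D image."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return gx, gy

# Modality gap in the gradient domain, flattened into one sample vector.
gap = np.concatenate([(gf - gn).ravel()
                      for gf, gn in zip(gradients(flash), gradients(nonflash))])

# Maximum-likelihood Laplace fit: mu = median, b = mean |x - mu|.
mu = np.median(gap)
b = np.abs(gap - mu).mean()
print(f"Laplace fit: mu={mu:.4f}, b={b:.4f}")
```

On real data, one would compare the empirical histogram of `gap` against the fitted Laplace density (and against a Gaussian fit) to see which models the heavy-tailed gradient-gap statistics better.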
Published in: IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)