Abstract

Despite being challenging, research on single-image denoising is enjoying a recent upsurge owing to the impressive performance of deep networks. Most self-supervised single-image denoising methods need a noisy training pair to learn the denoising function; however, training often suffers from identity mapping and overfitting. A recent work, Self2Self, proposed a Bernoulli-dropout-based denoising scheme that removes random pixel information to escape identity mapping. Nonetheless, real camera noise is signal-dependent and typically introduces only subtle changes to the image. Hence, pixels in a region may still preserve similar contextual information even under such a pixel-dropout strategy, raising the chance of identity mapping. In this work, we address this critical issue by generating the training pair through randomly masking out square regions rather than simple Bernoulli dropout, which offers a better chance of evading the relevant contextual information. The training pair is passed to the network with a self-supervised loss to produce a single predicted image at each iteration; these predictions are then averaged to obtain the final denoised image. We evaluate our method in the presence of additive and real-world noise, and observe better performance than existing traditional and self-supervised models.
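The square-region masking described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the number and size of squares, and the zero-fill choice are all assumptions made for the example.

```python
import numpy as np

def square_mask_pair(image, num_squares=8, size=8, rng=None):
    """Illustrative sketch: build a (masked input, target) training pair by
    zeroing out random square regions, in contrast to per-pixel Bernoulli
    dropout. All names and parameters here are hypothetical."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    mask = np.ones((h, w), dtype=image.dtype)
    for _ in range(num_squares):
        # Pick the top-left corner of a square to remove.
        y = int(rng.integers(0, max(1, h - size)))
        x = int(rng.integers(0, max(1, w - size)))
        mask[y:y + size, x:x + size] = 0
    masked = image * mask[..., None] if image.ndim == 3 else image * mask
    # The self-supervised loss would be computed only on the masked pixels
    # (mask == 0), so the network cannot simply copy the input there.
    return masked, image, mask
```

Because whole squares are removed rather than scattered pixels, the network cannot recover a masked pixel from its immediate neighbours, which is the intended defence against identity mapping.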
