Abstract

Despite the recent upsurge of self-supervised methods for single-image denoising, achieving robust and efficient performance remains challenging due to prevalent issues such as identity mapping, overfitting, and increased variance of network predictions. Recent self-supervised approaches address this with a dropout-based single-pixel masking strategy. However, real camera noise is signal-dependent and typically introduces only subtle changes to the image, so such a strategy still preserves contextual information about the target location even after dropping it out, leading to identity mapping and overfitting in practice. Here, Cut2Self, a new denoising method, is proposed to address this issue: it cuts out random block regions instead of single pixels, increasing the likelihood of removing contextual information from neighbouring pixels, thereby reducing the chance of identity mapping while remaining resilient against overfitting. Cut2Self creates distinct training pairs for each training iteration by randomly cutting out square regions of the input and feeding the masked image to the denoising network. The iteration-wise network predictions are then assembled to generate the final denoised output. Cut2Self is evaluated on synthetic and real-world noise, demonstrating consistent denoising performance compared with other supervised, unsupervised, and self-supervised methods.
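The masking-and-ensembling pipeline described above can be sketched as follows. This is a minimal illustration of the block-cutout idea only, not the authors' implementation: the helper names (`random_cutout_mask`, `denoise_ensemble`), the block size/count schedule, and the plain averaging of predictions are all assumptions for exposition; the actual denoising network is stood in for by a placeholder callable.

```python
import numpy as np

def random_cutout_mask(h, w, block_size, n_blocks, rng):
    """Binary mask with n_blocks random square regions zeroed out.

    Hypothetical helper: the exact block geometry and sampling schedule
    used by Cut2Self may differ from this sketch.
    """
    mask = np.ones((h, w), dtype=np.float32)
    for _ in range(n_blocks):
        y = rng.integers(0, h - block_size + 1)
        x = rng.integers(0, w - block_size + 1)
        mask[y:y + block_size, x:x + block_size] = 0.0  # drop a whole block
    return mask

def denoise_ensemble(noisy, denoiser, n_iters=8, block_size=4,
                     n_blocks=16, seed=0):
    """Average denoiser outputs over independently block-masked inputs.

    Each iteration sees a differently masked copy of the noisy image,
    so the per-iteration predictions differ; averaging assembles them
    into a single denoised estimate.
    """
    rng = np.random.default_rng(seed)
    h, w = noisy.shape
    acc = np.zeros_like(noisy, dtype=np.float32)
    for _ in range(n_iters):
        mask = random_cutout_mask(h, w, block_size, n_blocks, rng)
        acc += denoiser(noisy * mask)  # masked input -> network prediction
    return acc / n_iters

# Toy usage: an identity "denoiser" just to exercise the pipeline shape.
img = np.random.default_rng(1).random((32, 32)).astype(np.float32)
out = denoise_ensemble(img, denoiser=lambda x: x)
```

Because each square block removes an entire neighbourhood rather than an isolated pixel, the network cannot trivially reconstruct the masked values from immediately adjacent context, which is the intuition behind preferring block cutout over single-pixel dropout.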
