When Self-Supervised Pre-Training Meets Single Image Denoising

Abstract

We present a self-supervised pre-training scheme for single image denoising based on a novel pretext task. Our work is inspired by the success of self-supervised learning (SSL) methods in transfer learning. These methods have been shown to be extremely effective when used to pre-train a model that is then fine-tuned on small datasets. As the pretext task, we propose training a denoising network on patches of the downsampled input image, which we treat as pseudo-clean image patches, together with an adaptive noise estimator that learns the specific noise distribution of the input image. By carrying out the pre-training on the single input image, rather than on a separate dataset, we avoid the well-known noise distribution gap between images in the training dataset and the single input image used at test time. We evaluate our SSL method for single image denoising via extensive experiments on both synthetic and real-world noisy image datasets. By transferring our pre-training to IDR [1], we demonstrate state-of-the-art results compared to existing unsupervised denoising methods, showing that SSL pre-training is also a promising framework for image denoising. Website: https://hamadichihaoui.github.io/SSL-Denoising.
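The pretext task described above can be sketched in a few lines: downsample the noisy input to obtain pseudo-clean targets, estimate the input's noise level, and re-corrupt the targets to form training pairs. This is a minimal illustration, not the paper's implementation; the box-filter downsampling, the residual-based noise estimator, and the Gaussian noise model are all simplifying assumptions.

```python
import numpy as np

def pseudo_clean(img, factor=2):
    """Average-pool downsampling: averaging attenuates zero-mean noise,
    yielding a pseudo-clean image (a box filter stands in for the paper's
    downsampling scheme)."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor
    img = img[:h2, :w2]
    return img.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))

def estimate_noise_sigma(img, factor=2):
    """Crude stand-in for the paper's adaptive noise estimator: std of the
    residual between the image and its upsampled pseudo-clean version."""
    pc = pseudo_clean(img, factor)
    up = np.kron(pc, np.ones((factor, factor)))
    h, w = up.shape
    return (img[:h, :w] - up).std()

def make_training_pair(img, rng, factor=2):
    """Build one (noisy, target) pair for self-supervised pre-training:
    the pseudo-clean patch is re-corrupted with noise matching the
    estimated level (Gaussian assumed for illustration)."""
    target = pseudo_clean(img, factor)
    sigma = estimate_noise_sigma(img, factor)
    noisy = target + rng.normal(0.0, sigma, target.shape)
    return noisy, target
```

A denoising network would then be pre-trained on such pairs drawn from the single test image, before any fine-tuning or inference.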

Similar Papers
  • Research Article
  • Cited by 1
  • 10.1016/j.radonc.2025.111297
A foundation model for brain tumor MRI analysis: WHO grading and subtype classification.
  • Jan 1, 2026
  • Radiotherapy and oncology : journal of the European Society for Therapeutic Radiology and Oncology
  • Junxian Li + 5 more


  • Conference Article
  • Cited by 31
  • 10.24963/ijcai.2022/159
Self-supervised Learning and Adaptation for Single Image Dehazing
  • Jul 1, 2022
  • Yudong Liang + 4 more

Existing deep image dehazing methods usually depend on supervised learning with a large number of hazy-clean image pairs, which are expensive or difficult to collect. Moreover, the dehazing performance of the learned model may deteriorate significantly when the training hazy-clean image pairs are insufficient or differ from the real hazy images encountered in applications. In this paper, we show that exploiting a large-scale training set and adapting to real hazy images are two critical issues in learning effective deep dehazing models. Under the depth guidance estimated by a well-trained depth estimation network, we leverage the conventional atmospheric scattering model to generate massive hazy-clean image pairs for the self-supervised pre-training of the dehazing network. Furthermore, self-supervised adaptation is presented to adapt the pre-trained network to real hazy images. A learning-without-forgetting strategy is also deployed in self-supervised adaptation by combining self-supervision and model adaptation via contrastive learning. Experiments show that our proposed method performs favorably against state-of-the-art methods and is quite efficient, handling a 4K image in 23 ms. The code is available at https://github.com/DongLiangSXU/SLAdehazing.
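The pair-generation step this abstract describes rests on the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)). A minimal sketch of synthesizing a hazy image from a clean image and its depth map follows; the choices of β and atmospheric light A are illustrative, not values from the paper.

```python
import numpy as np

def synthesize_haze(clean, depth, beta=1.0, A=0.9):
    """Apply the atmospheric scattering model I = J*t + A*(1 - t),
    where t = exp(-beta * depth). `clean` is an (H, W, 3) image in [0, 1],
    `depth` an (H, W) depth map; beta and A are illustrative constants."""
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission, broadcast over channels
    return clean * t + A * (1.0 - t)
```

Sampling β and A over plausible ranges for each clean image yields the massive hazy-clean training pairs the abstract refers to.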
