Abstract

Single image super-resolution with diffusion probabilistic models (SRDiff) is a successful diffusion-based image super-resolution model that produces high-quality images and trains stably. However, its long sampling time makes it slower at test time than other deep learning-based algorithms. Reducing the total number of diffusion steps accelerates sampling, but it also causes the true reverse diffusion distribution to deviate from a Gaussian and become multimodal, which violates the Gaussian assumption underlying diffusion models and degrades the results. To overcome this limitation, we propose fast SRDiff (FSRDiff), an algorithm that integrates a generative adversarial network (GAN) with a diffusion model to speed up SRDiff. FSRDiff employs a conditional GAN to approximate the multimodal distribution of the reverse diffusion process, thus preserving sample quality when the total number of diffusion steps is reduced. Experimental results show that FSRDiff reconstructs nearly 20 times faster than SRDiff while maintaining comparable performance on the DIV2K test set.
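The core idea of the abstract — replacing many small Gaussian reverse steps with a few large steps whose distribution is modeled by a conditional GAN generator — can be sketched as follows. This is a minimal toy illustration under assumptions of ours, not the paper's implementation: `generator` here is a hypothetical stand-in for the trained conditional GAN, and the 4-step schedule, array shapes, and conditioning signal are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 4  # drastically reduced number of reverse steps (a full diffusion model uses far more)

def generator(x_t, t, cond):
    # Hypothetical stand-in for the conditional GAN generator: it directly
    # samples x_{t-1} given the noisy image x_t and the low-resolution
    # condition, instead of assuming the reverse step is a small Gaussian.
    # Here it is just a toy denoiser that pulls x_t toward the condition.
    return 0.5 * x_t + 0.5 * cond

def sample(cond, shape):
    # Start from pure noise, as in standard diffusion sampling,
    # then take a few large GAN-modeled reverse steps.
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        x = generator(x, t, cond)
    return x

cond = np.ones((8, 8))      # toy "low-resolution" conditioning signal
sr = sample(cond, (8, 8))   # toy "super-resolved" output, shape (8, 8)
```

With only 4 steps, the toy output already sits close to the conditioning signal; the point is that each step may carry a large, multimodal jump, which a GAN can model but a single Gaussian cannot.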
