Abstract

Super-resolution (SR) techniques aim to enhance the resolution and quality of low-resolution images. In recent years, deep learning-based approaches have achieved remarkable success in this field; however, training deep SR models often requires large-scale datasets and computationally expensive operations, limiting their practicality. Various approaches have been proposed to tackle these computation and time constraints. Our approach combines knowledge distillation with a contrastive loss to train a compact and efficient contrastive self-distillation (CSD) model. In this framework, a teacher network is first trained on a large dataset in a conventional supervised manner, learning to generate high-resolution images from low-resolution inputs. A student network is then initialized with the same architecture as the teacher but with fewer parameters, and is trained to mimic the teacher's behavior via a contrastive loss, formulated by constructing positive and negative pairs of low-resolution and high-resolution image patches. The proposed CSD method achieves competitive performance compared with state-of-the-art SR methods while requiring significantly fewer parameters and computational resources. Furthermore, it demonstrates improved generalization, effectively reconstructing details in real-world images.
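The contrastive objective described above can be illustrated with a minimal sketch. This is an illustrative assumption, not the paper's exact formulation: the student's output for a patch is pulled toward the teacher's output (positive pair) and pushed away from a set of negative patches (e.g. low-quality reconstructions), using L1 distances in a ratio form. The function name and the choice of L1 distance are hypothetical.

```python
import numpy as np

def contrastive_distillation_loss(student_out, teacher_out, negatives, eps=1e-8):
    """Hypothetical sketch of a contrastive distillation loss.

    student_out / teacher_out: reconstructed patches for the same input
    (the positive pair); negatives: a list of patches the student output
    should be pushed away from. Minimizing the ratio shrinks the gap to
    the teacher while enlarging the gap to the negatives.
    """
    pos = np.mean(np.abs(student_out - teacher_out))          # positive distance
    neg = sum(np.mean(np.abs(student_out - n)) for n in negatives)
    return pos / (neg + eps)
```

A student patch that closely matches the teacher's output yields a small loss; one that drifts toward the negatives yields a large loss, which is the behavior the distillation step exploits.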

