Abstract

Video Super-Resolution (VSR) applications extensively utilize deep learning-based methods. Most VSR methods focus primarily on improving the fine patterns within reconstructed video frames and frequently overlook the crucial aspect of preserving conformation details, particularly sharpness. As a result, reconstructed video frames often fail to meet expectations. In this paper, we propose a Conformation Detail-Preserving Network (CDPN) named SuperVidConform, which restores local region features while maintaining the sharper details of video frames. The primary goal of this work is to generate a high-resolution (HR) frame from its corresponding low-resolution (LR) frame. The approach consists of two parts: (i) the proposed model decomposes conformation details from the ground-truth HR frames to provide additional information for the super-resolution process, and (ii) the video frames are passed to a temporal-modelling SR network that learns local region features through residual learning, exploiting intra-frame redundancies within video sequences. The proposed approach is designed and validated on the VID4, SPMC, and UDM10 datasets. The experimental results show that the proposed model achieves improvements of 0.43 dB (VID4), 0.78 dB (SPMC), and 0.84 dB (UDM10) in terms of PSNR. Further, the CDPN model sets a new performance benchmark on a self-generated surveillance dataset.
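The residual-learning idea mentioned above can be illustrated with a minimal sketch: the network predicts only the residual between a cheaply upsampled LR frame and the target HR frame, so fine details are easier to learn. The helper names below are hypothetical and the 1-D nearest-neighbour upsampling is a stand-in for the interpolation a real VSR pipeline would use; this is not the paper's actual architecture.

```python
def upsample_nearest(frame, scale):
    """Nearest-neighbour upsampling of a 1-D signal (stand-in for bicubic)."""
    return [v for v in frame for _ in range(scale)]

def reconstruct_hr(lr_frame, residual, scale=2):
    """HR estimate = coarse upsample + predicted residual (residual learning)."""
    coarse = upsample_nearest(lr_frame, scale)
    if len(coarse) != len(residual):
        raise ValueError("residual must match the upsampled frame length")
    return [c + r for c, r in zip(coarse, residual)]

lr = [0.0, 1.0]                    # toy low-resolution frame
res = [0.0, 0.25, -0.25, 0.0]      # toy residual a trained network would predict
hr = reconstruct_hr(lr, res)
print(hr)  # [0.0, 0.25, 0.75, 1.0]
```

Because the coarse upsample already carries the low-frequency content, the residual branch only has to model high-frequency detail, which is the motivation for residual learning in SR networks generally.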
