Abstract
Video Super-Resolution (VSR) is gaining massive attention in the digital age across application domains such as video surveillance and ultra-high-definition displays. Deep learning (DL) has recently made significant progress in both research and industry, and numerous DL-based VSR methods have been proposed to improve resolution. VSR relies on exploiting inter-frame information: video frames are recorded sequentially, so an object's temporal changes cause inconsistent measurements across frames. Consequently, suitable models and algorithms must be developed to produce artifact-free images. This paper examines two reconstruction approaches that incorporate different sources of motion data. The first is an iterative approach that implicitly treats dynamic behavior as uncertainty in the forward model; the second derives direct reconstruction techniques from an explicit motion map. The paper thus compares methods based on explicit and implicit motion compensation. Convolutional neural network (CNN) and recurrent residual network (RRN) based deep learning methods are analyzed for reconstructing high-resolution (HR) frames. Numerical findings for both methods are presented on simulated data. Observations show that the RRN can boost super-resolution (SR) performance and achieve considerable visual quality after reconstruction; however, its performance is limited on small datasets owing to its shallow network parameters. Extensive experiments show that the RRN significantly improves SR results due to its substantial computational efficiency.