Abstract
Super-resolution of images and video has long been a challenging problem in computer vision, and progress in this area has broad practical impact. Video super-resolution methods in particular aim to restore spatial detail while preserving temporal coherence across frames. However, the large parameter counts and heavy computational demands of existing deep convolutional neural networks hinder their deployment on mobile platforms. To address these concerns, this work investigates deep convolutional neural networks in depth and proposes a lightweight video super-resolution model, the Deep Residual Recursive Network (DRRN), that reduces computational load. The model applies residual learning to stabilize Recurrent Neural Network (RNN) training, and it adopts depth-wise separable convolution to improve the efficiency of the super-resolution operations. Extensive experiments show that the proposed model is computationally efficient and produces detailed, temporally consistent video super-resolution results. This work therefore represents an important step toward deploying video super-resolution on resource-constrained devices.
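As a rough illustration of the two ingredients named above (residual learning and depth-wise separable convolution), the following PyTorch sketch shows a residual block built from depthwise separable convolutions. The channel count, kernel size, and module names are illustrative assumptions for exposition only, not the paper's actual DRRN architecture.

```python
# Illustrative sketch only: a residual block built from depthwise separable
# convolutions. Channel counts, kernel sizes, and module names are assumptions,
# not the DRRN design described in the paper.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, channels: int):
        super().__init__()
        # Depthwise: one filter per input channel (groups == channels).
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))


class ResidualSeparableBlock(nn.Module):
    """Two separable convolutions wrapped in a skip connection (residual learning)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = DepthwiseSeparableConv(channels)
        self.conv2 = DepthwiseSeparableConv(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The block learns only a residual; the identity path helps stabilize training.
        return x + self.conv2(self.relu(self.conv1(x)))


if __name__ == "__main__":
    block = ResidualSeparableBlock(channels=64)
    frame_features = torch.randn(1, 64, 180, 320)  # e.g. features of one video frame
    out = block(frame_features)
    print(out.shape)  # torch.Size([1, 64, 180, 320])
```

The efficiency argument is that a standard 3x3 convolution over C channels costs roughly C x C x 9 weights, whereas the depthwise-plus-pointwise factorization costs about C x 9 + C x C, which is why such blocks are attractive for mobile deployment.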