Abstract

In this paper, we propose a hardware-efficient residual recurrent neural network for real-time video super-resolution (VSR) on field-programmable gate arrays (FPGAs). Although recent learning-based VSR methods have achieved remarkable performance, their large computational complexity prohibits deploying such sophisticated models on FPGAs for real-time applications. Constrained by limited hardware resources, state-of-the-art FPGA-based VSR methods perform single-image super-resolution frame by frame over the video sequence and therefore suffer from temporal inconsistency. To exploit inter-frame temporal correlation for real-time VSR on low-complexity hardware, we introduce a hardware-efficient recurrent neural network, ERVSR. Specifically, ERVSR leverages the input frame and the temporal information carried by the hidden state to reconstruct the high-resolution counterpart. To reduce the number of network parameters, the low-resolution input branch and the hidden-state branch are convolved individually, and a channel modulation coefficient is proposed to explicitly control how many output feature channels are allocated to each branch. To reduce memory consumption, we compress the hidden state with a dedicated lightweight scheme that applies statistical normalization followed by fixed-point quantization. In addition, we adopt group convolution and depthwise separable convolution to further compact the network. We evaluate ERVSR on multiple public datasets. Experimental results demonstrate that ERVSR outperforms existing state-of-the-art FPGA-based VSR methods in both image quality and data throughput.
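
To make the described mechanisms concrete, below is a minimal PyTorch sketch of the three ideas named in the abstract: a dual-branch recurrent cell whose output channels are split by a channel modulation coefficient, hidden-state compression via normalization plus fixed-point quantization, and depthwise separable convolution. This is not the authors' FPGA implementation; the names ERVSRCellSketch, compress_hidden, and alpha, the layer sizes, and the exact normalization statistics are all illustrative assumptions.

    # Minimal sketch (assumption: a PyTorch-style reference model, not the
    # authors' FPGA design). All names and layer sizes are hypothetical.
    import torch
    import torch.nn as nn

    class ERVSRCellSketch(nn.Module):
        def __init__(self, in_ch=3, hid_ch=32, out_ch=64, alpha=0.5):
            super().__init__()
            # Channel modulation coefficient `alpha`: splits the out_ch output
            # feature channels between the two individually convolved branches.
            c_in = int(out_ch * alpha)   # channels from the LR input branch
            c_hid = out_ch - c_in        # channels from the hidden-state branch
            self.input_branch = nn.Conv2d(in_ch, c_in, 3, padding=1)
            self.hidden_branch = nn.Conv2d(hid_ch, c_hid, 3, padding=1)
            # Depthwise separable convolution (depthwise is the extreme case of
            # group convolution, groups == channels) to compact the network.
            self.depthwise = nn.Conv2d(out_ch, out_ch, 3, padding=1, groups=out_ch)
            self.pointwise = nn.Conv2d(out_ch, hid_ch, 1)

        def forward(self, x, h):
            # Convolve each branch individually, then concatenate the features.
            f = torch.relu(torch.cat([self.input_branch(x),
                                      self.hidden_branch(h)], dim=1))
            return self.pointwise(self.depthwise(f))  # next hidden state

    def compress_hidden(h, bits=8):
        # Statistical normalization: standardize with per-tensor statistics so
        # values land in a fixed range (the paper's exact scheme is not given here).
        mean, std = h.mean(), h.std()
        h_norm = ((h - mean) / (std + 1e-6)).clamp(-4.0, 4.0) / 4.0  # -> [-1, 1]
        # Fixed-point quantization to `bits` bits, stored as int8 to cut memory.
        scale = 2 ** (bits - 1) - 1
        return torch.round(h_norm * scale).to(torch.int8), mean, std

    # Usage: one recurrent step followed by hidden-state compression.
    cell = ERVSRCellSketch()
    x = torch.randn(1, 3, 64, 64)        # low-resolution input frame
    h = torch.zeros(1, 32, 64, 64)       # initial hidden state
    h = cell(x, h)
    h_q, mu, sigma = compress_hidden(h)  # quantized state kept between frames

Convolving the two branches separately rather than concatenating before a full convolution is what saves parameters here: the per-branch output widths, and hence the cost split, are set directly by alpha.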
