Abstract

Existing methods based on convolutional neural networks (CNNs) show excellent performance in single image super-resolution (SISR). Stacking massive convolutional layers can enhance these CNN-based methods, but it increases memory consumption due to the large number of parameters. In addition, the unilateral constraint on SISR focuses only on the up-sampling process and ignores image degradation, which limits the convergence accuracy of network training. To address these problems, we propose a lightweight network with bidirectional constraints (LNBC) for SISR. We present an extended layer named the enhanced cycle residual block (CRB) and develop a lightweight network with the CRB as the feature inference layer. The CRB improves feature expression ability by alternating and multiplexing convolutional layers without increasing the number of parameters. Furthermore, unlike existing methods that constrain only the up-sampling process, we propose bidirectional constraints on SISR named cycle consistency verification (CCV). In CCV, an introduced degradation network simulates image degradation and provides degradation constraints for the up-sampling network during joint training. Image up-sampling and degradation together form bidirectional constraints that tighten the convergence of the up-sampling network. Experiments show that LNBC outperforms comparison methods in the trade-off among image super-resolution performance, memory overhead, and running time.
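As a rough illustration of the bidirectional constraint described above, the following is a minimal, hypothetical sketch of a joint training step in which an up-sampling network is paired with a degradation network and penalized for failing to reconstruct the original low-resolution input. It assumes a PyTorch-style setup; `UpsampleNet`, `DegradationNet`, `joint_training_step`, and the weighting factor `lam` are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of a bidirectional (cycle-consistency) constraint for SISR.
# All module names and the loss weighting are assumptions for illustration only.
import torch
import torch.nn as nn

class UpsampleNet(nn.Module):
    """Placeholder up-sampling network (LR -> SR): conv layers plus pixel shuffle."""
    def __init__(self, scale=2, channels=3, feats=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        return self.body(lr)

class DegradationNet(nn.Module):
    """Placeholder degradation network (SR -> LR), simulating the down-sampling process."""
    def __init__(self, scale=2, channels=3, feats=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, channels, 3, stride=scale, padding=1),
        )

    def forward(self, sr):
        return self.body(sr)

def joint_training_step(up_net, deg_net, lr, hr, optimizer, lam=0.1):
    """One joint step: the usual up-sampling loss (SR vs. HR) plus a cycle term
    that forces the degraded SR image back toward the original LR input."""
    l1 = nn.L1Loss()
    sr = up_net(lr)              # LR -> SR (up-sampling direction)
    lr_cycle = deg_net(sr)       # SR -> LR (degradation direction)
    loss = l1(sr, hr) + lam * l1(lr_cycle, lr)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the degradation branch acts as the second, backward constraint: the up-sampling network is trained not only to match the high-resolution target but also to produce outputs whose simulated degradation is consistent with the observed low-resolution image.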
