Abstract
Binary neural networks (BNNs) are an effective approach to reducing the memory usage and computational complexity of full-precision convolutional neural networks (CNNs), and they have been widely used in deep learning. However, BNNs and real-valued models have different properties, which makes it difficult to draw on experience with CNN design when developing BNNs. In this article, we study the application of binary networks to the single-image super-resolution (SISR) task, in which the network is trained to restore the original high-resolution (HR) images. In general, the feature distributions in SISR networks are more complex than those in recognition models, since they must preserve abundant image information, e.g., texture, color, and details. To enhance the representation ability of BNNs, we explore a novel activation-rectified inference (ARI) module that achieves a more complete representation of features by combining observations from different quantization perspectives. The activations are divided into several parts with different quantization intervals and are inferred independently, which allows the binary activations to retain more image detail and yields finer inference. In addition, we propose an adaptive approximation estimator (AAE) that gradually learns an accurate gradient estimation interval in each layer to alleviate the optimization difficulty. Experiments conducted on several benchmarks show that our approach learns a binary SISR model with performance superior to state-of-the-art methods. The code will be released at https://github.com/jwxintt/Rectified-BSR.
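The abstract does not give implementation details, so the following is only a minimal, hypothetical sketch (not the authors' released code) of the two ideas it describes: binarizing an activation map over several quantization intervals and fusing the per-interval inferences (the ARI idea), and a straight-through gradient estimator whose clipping interval is a learnable per-layer parameter (the AAE idea). The class names, thresholds, and fusion rule below are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LearnableSTEBinarize(torch.autograd.Function):
    """Sign binarization with a straight-through estimator whose
    clipping interval is learned per layer (illustrative of AAE)."""

    @staticmethod
    def forward(ctx, x, interval):
        ctx.save_for_backward(x, interval)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        x, interval = ctx.saved_tensors
        # Pass gradients only where |x| lies inside the learned interval.
        mask = (x.abs() <= interval).float()
        grad_x = grad_out * mask
        # A simple surrogate gradient for the interval itself (an assumption,
        # not the estimator proposed in the paper).
        grad_interval = (grad_out * (x.abs() > interval).float()).sum().view_as(interval)
        return grad_x, grad_interval


class MultiIntervalBinaryConv(nn.Module):
    """Splits activations into several quantization intervals, binarizes each
    part separately, and fuses the resulting inferences (illustrative of the
    ARI idea; the thresholds and averaging fusion are assumptions)."""

    def __init__(self, channels, thresholds=(-0.5, 0.0, 0.5)):
        super().__init__()
        self.thresholds = thresholds
        self.interval = nn.Parameter(torch.ones(1))  # learnable clipping range
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            for _ in thresholds
        )

    def forward(self, x):
        outputs = []
        for t, conv in zip(self.thresholds, self.convs):
            # Shift the activation so each branch observes a different interval.
            b = LearnableSTEBinarize.apply(x - t, self.interval)
            outputs.append(conv(b))
        # Combine observations from the different quantization perspectives.
        return sum(outputs) / len(outputs)
```

A block like this could stand in for a binary convolution inside a super-resolution backbone; the grouping rule, thresholds, and fusion actually used in the paper may differ and should be taken from the released code once available.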