Abstract

We introduce a novel cascade learning framework that incrementally trains deeper and more accurate convolutional neural networks. The proposed cascade learning facilitates the training of deep, efficient networks with plain convolutional neural network (CNN) architectures as well as with residual network (ResNet) architectures. We demonstrate this on the problem of image super-resolution (SR), showing that cascade-trained (CT) SR CNNs and CT-ResNets achieve state-of-the-art results with fewer network parameters. To further improve network efficiency, we propose a cascade trimming strategy that progressively reduces the network size by trimming a group of layers at a time while preserving the network's discriminative ability. We also propose context network fusion (CNF), a method that combines features from an ensemble of networks through context fusion layers. We show that applying CNF to an ensemble of CT SR networks yields a network with better efficiency and accuracy than other fusion methods. CNF can additionally be trained with the proposed edge-aware loss function to obtain sharper edges and improve perceptual image quality. Experiments on benchmark datasets show that our deep convolutional networks achieve state-of-the-art accuracy and are much faster than existing deep super-resolution networks.
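The abstract mentions an edge-aware loss used to sharpen edges in the super-resolved output. The paper's exact formulation is not given here, but one common way to make a reconstruction loss edge-aware is to add a gradient-domain penalty to the usual pixel-wise MSE. The sketch below is a hypothetical illustration of that idea, not the authors' definition: `lam`, the finite-difference edge maps, and the function name are all assumptions for illustration.

```python
import numpy as np

def edge_aware_loss(pred, target, lam=0.1):
    """Hypothetical edge-aware loss sketch: pixel-wise MSE plus a weighted
    MSE on image gradients, which penalizes blurred or misplaced edges.
    The actual loss in the paper may be defined differently."""
    # standard pixel-wise reconstruction term
    mse = np.mean((pred - target) ** 2)
    # horizontal and vertical finite differences approximate edge maps
    gx_p, gx_t = np.diff(pred, axis=1), np.diff(target, axis=1)
    gy_p, gy_t = np.diff(pred, axis=0), np.diff(target, axis=0)
    # penalize mismatch between predicted and true edge maps
    edge = np.mean((gx_p - gx_t) ** 2) + np.mean((gy_p - gy_t) ** 2)
    return mse + lam * edge
```

With `lam > 0`, a prediction that matches average intensities but blurs a sharp edge incurs a larger loss than under plain MSE, which is the intuition behind training for sharper, perceptually better edges.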
