Abstract

Image super-resolution (SR) techniques benefit various remote sensing applications by recovering finer spatial details than those captured by the original acquisition sensors. Recent advances in deep learning bring a new opportunity for SR by learning the mapping from low to high resolution. The most commonly used convolutional neural network (CNN)-based approaches are prone to excessive smoothing or blurring because of their mean squared error (MSE) optimization objective. In contrast, generative adversarial network (GAN)-based approaches can achieve more perceptually acceptable results. However, the preliminary design of the GAN generator, built from simple direct- or skip-connection residual blocks, limits its SR potential. The emerging dense convolutional network (DenseNet), equipped with dense connections, has shown promise in both classification and super-resolution, so introducing DenseNet into the GAN framework is intuitively expected to boost SR performance. However, because the convolutional kernels in existing residual blocks are arranged in a one-dimensional flat structure, the formation of dense connections relies heavily on skip connections (linking the current layer to all subsequent layers via shortcut paths). To increase connection density, the network depth must therefore be expanded accordingly, which in turn causes training difficulties such as vanishing gradients and information propagation loss. To this end, this paper proposes an ultra-dense GAN (udGAN) for image SR, in which the internal layout of the residual block is reformed into a two-dimensional matrix topology. This topology provides additional diagonal connections, so that sufficient pathways can be achieved with fewer layers. In particular, the number of pathways is almost doubled compared with previous dense connections under the same number of layers. The resulting rich connections adapt flexibly to the diversity of image content, leading to improved SR performance.
Extensive experiments on public benchmark datasets and real-world satellite imagery show that our model outperforms state-of-the-art counterparts in both subjective and quantitative assessments, especially perception-oriented ones.
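The depth argument above can be made concrete with the standard DenseNet connection count. This is only a sketch of the one-dimensional baseline the abstract criticizes (the exact udGAN matrix topology is not specified here): in a flat dense block of L layers, each layer receives the outputs of all preceding layers plus the block input, giving L(L+1)/2 direct connections, so connection density can only grow by adding depth.

```python
def dense_connections(num_layers):
    """Direct connections in a DenseNet-style 1-D block of L layers,
    where every layer receives the feature maps of all preceding
    layers (plus the block input): L * (L + 1) / 2 in total."""
    return num_layers * (num_layers + 1) // 2

# In the flat one-dimensional layout, roughly doubling the pathways
# of a 4-layer block requires stretching it to 6 layers, illustrating
# why connection density is tied to depth in this arrangement.
print(dense_connections(4))  # 10
print(dense_connections(6))  # 21
```

The proposed two-dimensional matrix layout instead adds diagonal pathways between kernels, which is how the paper reaches a comparable (roughly doubled) connection count without the extra depth.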
