Abstract

A common issue with deep neural network-based methods for Single Image Super-Resolution (SISR) is the recovery of finer texture details when super-resolving at large upscaling factors. This issue is closely tied to the choice of the objective loss function. In particular, recent works proposed the use of a VGG loss, which consists in minimizing the error between the generated high-resolution images and the ground truth in the feature space of a Convolutional Neural Network (VGG19) pre-trained on the very large ImageNet dataset. When super-resolving images whose distribution is “far” from the ImageNet distribution (e.g., satellite images), this fixed VGG loss is no longer relevant. In this paper, we present a general framework named Generative Collaborative Networks (GCN), in which the generator (the mapping of interest) is optimized in the feature space of a feature extractor network. The two networks (generator and extractor) are collaborative in the sense that the latter “helps” the former by constructing discriminative and relevant features (not necessarily fixed, and possibly learned jointly with the generator). We evaluate the GCN framework in the context of SISR and show that it yields a method adapted to super-resolution domains that are “far” from the ImageNet domain.
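
To make the core idea concrete, the sketch below shows a generator optimized under a distance computed in the feature space of an extractor network, in the spirit of the VGG loss and of GCN. This is a minimal PyTorch-style illustration under stated assumptions: the generator and extractor modules, the MSE feature distance, and the training step are hypothetical placeholders, not the authors' implementation; in GCN the extractor may itself be learned jointly with the generator rather than kept fixed like the pre-trained VGG19.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def feature_space_loss(generator: nn.Module,
                           extractor: nn.Module,
                           lr_images: torch.Tensor,
                           hr_images: torch.Tensor) -> torch.Tensor:
        """Distance between super-resolved and ground-truth images,
        measured in the extractor's feature space (VGG-loss style)."""
        sr_images = generator(lr_images)   # super-resolve the low-res batch
        feat_sr = extractor(sr_images)     # features of the generated images
        feat_hr = extractor(hr_images)     # features of the ground truth
        return F.mse_loss(feat_sr, feat_hr)

    # Hypothetical training step: only the generator is updated here;
    # in the collaborative setting, the extractor's parameters may also
    # be trained mutually with the generator.
    # optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
    # loss = feature_space_loss(generator, extractor, lr_batch, hr_batch)
    # optimizer.zero_grad(); loss.backward(); optimizer.step()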
