High-resolution remote sensing imagery is used in a broad range of tasks, including detection and classification of objects. High-resolution imagery is, however, expensive to obtain, while lower-resolution imagery is often freely available and can be used for a range of social-good applications. To that end, we curate a multi-spectral, multi-image dataset for super-resolution of satellite images. We use PlanetScope imagery from the SpaceNet-7 challenge as the high-resolution reference and multiple Sentinel-2 revisits of the same location as the low-resolution imagery. We present the first results of applying multi-image super-resolution (MISR) to multi-spectral remote sensing imagery. Additionally, we introduce a radiometric-consistency module into the MISR model to preserve the high radiometric resolution and quality of the Sentinel-2 sensor. We show that MISR is superior to single-image super-resolution (SISR) and other baselines on a range of image fidelity metrics. Furthermore, we present the first assessment of the utility of multi-image super-resolution for semantic and instance segmentation, two common remote sensing tasks, showing that utilizing multiple images improves performance on these downstream tasks, although MISR pre-processing is not essential.