Sentinel-2 imagery has attracted significant attention in Earth system studies owing to its free access and high revisit frequency. Since its spatial resolution is insufficient for many applications, e.g., fine-grained land cover mapping, some studies employ fusion techniques that combine high-resolution RGB images with Sentinel-2 multispectral images to improve the resolution of the latter. However, existing image fusion methods suffer from two issues. First, they usually assume that the time interval between images is short (within several days), which is a strong assumption for large-scale high-resolution images and many real-world applications. Second, the spectral discrepancy between multispectral and RGB images can induce spectral aberrations in the fused Sentinel-2 imagery. To alleviate these issues, we propose an adaptive image fusion approach named S2IFNet, which adaptively fuses images with long time intervals (from months to years) and spectral inconsistency, thereby increasing the spatial resolution of the Sentinel-2 multispectral bands. On top of the feature extraction and fusion modules, we propose a spectral feature compensation module and a change-aware feature reconstruction module. The former alleviates the possible degradation of spectral attributes in Sentinel-2 imagery caused by feature fusion. The latter integrates semantic and texture information to avoid introducing spurious textures caused by land cover changes over time. Experiments demonstrate that S2IFNet surpasses existing image fusion and reference-based super-resolution methods on both synthetic and real datasets, yielding fusion results that are clearer and more reliable.
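For readers who want a concrete picture of how the components named above could fit together, the following PyTorch sketch arranges feature extraction, feature fusion, spectral feature compensation, and change-aware reconstruction into one forward pass. It is a minimal illustration only: the class name `S2IFNetSketch`, all layer sizes, the residual compensation design, and the sigmoid change gate are assumptions, not the authors' implementation.

```python
# Illustrative sketch of the module layout described in the abstract.
# Layer sizes, the residual compensation, and the change gate are assumptions.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Two 3x3 convolutions with ReLU, used as a generic feature extractor."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class S2IFNetSketch(nn.Module):
    """Hypothetical arrangement of the components named in the abstract."""
    def __init__(self, ms_bands=4, rgb_bands=3, feat=64, scale=2):
        super().__init__()
        # Feature extraction for the low-resolution multispectral (MS) input
        # and the high-resolution RGB reference (assumed co-registered).
        self.ms_encoder = ConvBlock(ms_bands, feat)
        self.rgb_encoder = ConvBlock(rgb_bands, feat)
        # Feature fusion of the two streams.
        self.fusion = ConvBlock(2 * feat, feat)
        # Spectral feature compensation: re-injects MS features so spectral
        # attributes are not overwritten by RGB texture (assumed residual form).
        self.spectral_comp = ConvBlock(2 * feat, feat)
        # Change-aware gate: suppresses RGB textures where the two acquisitions
        # disagree, falling back to the MS stream (assumed sigmoid gating).
        self.change_gate = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.upsample = nn.Upsample(scale_factor=scale, mode="bilinear",
                                    align_corners=False)
        self.reconstruct = nn.Conv2d(feat, ms_bands, 3, padding=1)

    def forward(self, ms_lr, rgb_hr):
        # Bring the MS input to the RGB spatial grid before encoding.
        ms_up = self.upsample(ms_lr)
        f_ms = self.ms_encoder(ms_up)
        f_rgb = self.rgb_encoder(rgb_hr)
        f_fused = self.fusion(torch.cat([f_ms, f_rgb], dim=1))
        # Spectral compensation as a residual conditioned on the MS stream.
        f_comp = f_fused + self.spectral_comp(torch.cat([f_fused, f_ms], dim=1))
        # Change-aware reconstruction: blend fused and MS features per pixel.
        gate = self.change_gate(torch.cat([f_ms, f_rgb], dim=1))
        f_out = gate * f_comp + (1.0 - gate) * f_ms
        # Final reconstruction with a global residual on the upsampled MS.
        return self.reconstruct(f_out) + ms_up


if __name__ == "__main__":
    net = S2IFNetSketch(ms_bands=4, rgb_bands=3, feat=64, scale=2)
    ms = torch.randn(1, 4, 64, 64)     # low-resolution multispectral patch
    rgb = torch.randn(1, 3, 128, 128)  # high-resolution RGB patch (2x grid)
    print(net(ms, rgb).shape)          # torch.Size([1, 4, 128, 128])
```

The gating step reflects the abstract's stated goal of avoiding spurious textures where land cover has changed between acquisitions; how the actual network detects such changes is not specified in the abstract.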