Abstract

Deep learning-based hyperspectral image super-resolution methods have achieved great success recently. However, most methods use either 2D or 3D convolution to extract features, and rarely combine the two types of convolution in one network. Moreover, when a model contains only 3D convolution, almost all methods take every band of the hyperspectral image as input, which incurs a large memory footprint. To address these issues, we explore a new structure for hyperspectral image super-resolution that exploits spectrum and feature context. Motivated by the high similarity among adjacent bands, we design a dual-channel network that uses 2D and 3D convolution to jointly exploit information from both a single band and its adjacent bands, which differs from previous works. Through a depth-split connection, the network effectively shares spatial information and thereby strengthens learning in the 2D spatial domain. In addition, our method reuses the features extracted from the previous band, which makes the information complementary and simplifies the network structure. Through feature context fusion, the performance of the algorithm is significantly enhanced. Extensive evaluations and comparisons on three public datasets demonstrate that our approach achieves state-of-the-art results compared with existing approaches.
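The dual-channel idea described above can be illustrated with a minimal sketch: a 2D branch operates on a single band while a 3D branch operates on the neighborhood of adjacent bands, and the two feature maps are fused. This is only an illustrative toy (naive convolutions, fusion by elementwise sum, hypothetical function names), not the paper's actual network.

```python
import numpy as np

def conv2d(x, k):
    """Naive 'valid' 2D convolution of a single band with kernel k."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv3d(x, k):
    """Naive 'valid' 3D convolution over a (band, H, W) cube."""
    kd, kh, kw = k.shape
    D, H, W = x.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for d in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[d, i, j] = np.sum(x[d:d + kd, i:i + kh, j:j + kw] * k)
    return out

def dual_channel_step(cube, b, k2d, k3d):
    """Illustrative fusion of a 2D feature from band b with a 3D
    feature from its adjacent-band neighborhood [b-1, b, b+1]."""
    f2d = conv2d(cube[b], k2d)               # single-band spatial branch
    f3d = conv3d(cube[b - 1:b + 2], k3d)[0]  # adjacent-band spectral branch
    return f2d + f3d                         # toy fusion by summation
```

For a 5-band cube of 8x8 bands and 3x3 / 3x3x3 kernels, `dual_channel_step(cube, 2, k2d, k3d)` returns a 6x6 fused feature map for the middle band.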
