Abstract

Hyperspectral images are usually acquired in a scanning-based manner, which can be inconvenient in some situations. In such cases, spectral super-resolution of RGB images emerges as an alternative. However, current mainstream spectral super-resolution methods aim to generate continuous spectral information over a very narrow range, limited to visible light. Some researchers introduce hyperspectral images as auxiliary data, but the auxiliary hyperspectral images are usually required to cover the same spatial extent as the RGB images. To address this issue, this paper designs a general point–surface data fusion method, named GRSS-Net, to achieve spectral super-resolution of RGB images. The proposed method uses hyperspectral point data as auxiliary data to provide spectral reference information, so the spectral reconstruction range can be extended according to the spectral range of the point data. The method takes compressed sensing theory as its underlying physical mechanism and unfolds the traditional hyperspectral image reconstruction optimization problem into a deep network, finally producing a high-spatial-resolution hyperspectral image. The proposed method therefore combines the non-linear feature extraction ability of deep learning with the interpretability of traditional physical models. A series of experiments demonstrates that the proposed method can effectively reconstruct spectral information from RGB images, and it provides a spectral super-resolution framework for different applications.
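As a rough illustration of the compressed-sensing formulation mentioned above (the operator notation, regularizer, and step size below are assumptions for exposition, not taken from the paper), the RGB observation can be modeled as a spectral downsampling of the unknown hyperspectral signal, and one unfolded iteration of the resulting reconstruction problem takes a proximal-gradient form:

\[
\mathbf{y} = \mathbf{R}\,\mathbf{x} + \mathbf{n}, \qquad
\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \tfrac{1}{2}\,\lVert \mathbf{y} - \mathbf{R}\mathbf{x} \rVert_2^2 + \lambda\,\varphi(\mathbf{x}),
\]
\[
\mathbf{x}^{(k+1)} = \mathrm{prox}_{\lambda\varphi}\!\left( \mathbf{x}^{(k)} - \rho\,\mathbf{R}^{\top}\!\left( \mathbf{R}\,\mathbf{x}^{(k)} - \mathbf{y} \right) \right),
\]

where \(\mathbf{y}\) is an observed RGB pixel spectrum, \(\mathbf{x}\) is the hyperspectral spectrum to be reconstructed, \(\mathbf{R}\) is the spectral response (downsampling) matrix, \(\mathbf{n}\) is noise, and \(\varphi\) is a sparsity-promoting prior. In a deep unfolding of this kind, the proximal operator and step size \(\rho\) are typically replaced by learned network modules, which is how the non-linear feature extraction of deep learning can be combined with the interpretability of the physical model; in GRSS-Net the hyperspectral point data would serve as the spectral reference informing this reconstruction.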
