Abstract

Hyperspectral imaging simultaneously captures images of the same scene across a large number of spectral channels, and has diverse applications ranging from agriculture and astronomy to surveillance and mineralogy, to name a few. However, due to various hardware limitations, current hyperspectral sensors only provide low-resolution (LR) hyperspectral images compared with the RGB images obtained from a common color camera. Thus, fusing an LR hyperspectral image with the corresponding high-resolution (HR) RGB image to recover an HR hyperspectral image has attracted much attention, and is usually solved as an optimization problem with prior-knowledge constraints such as sparse representation and spectral physical properties. Motivated by the great success of deep convolutional neural networks (DCNNs) in many computer vision tasks, this study aims to design a novel DCNN architecture for effectively fusing the LR hyperspectral and HR RGB images. Taking into consideration the large difference in spatial resolution between the observed RGB and hyperspectral images, we propose a multi-scale DCNN that gradually reduces the feature sizes of the RGB image while increasing the feature sizes of the hyperspectral image for fusion. Furthermore, we integrate multi-level cost functions into the proposed multi-scale fusion CNN architecture to alleviate the vanishing gradient problem during training. Experimental results on benchmark datasets validate that the proposed multi-level and multi-scale spatial and spectral fusion CNN outperforms state-of-the-art methods in both quantitative metrics and visual quality.
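The core idea of the multi-scale fusion described above can be illustrated with a minimal NumPy sketch: HR RGB features are progressively downsampled while LR hyperspectral features are progressively upsampled until the two meet at a common spatial size, after which they are fused. This is purely a toy illustration of the data flow, not the authors' network; the pooling/upsampling operators and the final concatenation stand in for the learned convolutional fusion layers of the actual DCNN.

```python
import numpy as np

def avg_pool2(x):
    # 2x2 average pooling: halves the spatial size of a (C, H, W) feature map
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample2(x):
    # nearest-neighbor upsampling: doubles the spatial size of a (C, H, W) map
    return x.repeat(2, axis=1).repeat(2, axis=2)

def multiscale_fuse(rgb, hs):
    """Gradually reduce RGB feature sizes and increase hyperspectral
    feature sizes until their spatial resolutions match, then fuse by
    channel-wise concatenation (a stand-in for learned fusion layers)."""
    while rgb.shape[1] > hs.shape[1]:
        rgb = avg_pool2(rgb)
        hs = upsample2(hs)
        # the actual network would apply conv blocks at each scale here
    return np.concatenate([rgb, hs], axis=0)

rgb = np.random.rand(3, 64, 64)   # HR RGB feature map (3 channels)
hs = np.random.rand(31, 16, 16)   # LR hyperspectral map (31 bands, hypothetical)
fused = multiscale_fuse(rgb, hs)  # shapes meet at 32x32 -> (34, 32, 32)
```

In the paper's architecture the two streams would pass through convolutional layers at every scale; the sketch only shows why the progressive resize lets features of very different spatial resolutions be combined without an abrupt 4x (or larger) interpolation step.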
