Abstract

Hyperspectral cameras provide data with high spectral resolution, but their typically lower spatial resolution compared to color (RGB) instruments remains a limitation for more detailed studies. This article presents a simple yet powerful method for fusing co-registered image data of high spatial and low spectral resolution – e.g. RGB – with data of low spatial and high spectral resolution – hyperspectral. The proposed method exploits the overlap in the phenomena observed by the two cameras to build a model through least-squares projections. This yields two images: 1) a high-resolution image spatially correlated with the input RGB image but carrying more spectral information than the 3 RGB bands; 2) a low-resolution image containing the spectral information that is spatially uncorrelated with the RGB image. We show results for semi-artificial benchmark datasets and a real-world application. Performance metrics indicate the method is well suited for data enhancement.
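
The core idea can be illustrated with a minimal least-squares sketch (the function name, the block-averaging downsampling step, and the purely linear projection are illustrative assumptions, not the authors' implementation): fit a projection from the low-resolution RGB bands to the hyperspectral bands, apply it to the high-resolution RGB image to obtain the spatially correlated component, and keep the low-resolution residual as the uncorrelated component.

```python
import numpy as np

def fuse_rgb_hsi(rgb_hr, hsi_lr, scale):
    """Hedged sketch of least-squares RGB/hyperspectral fusion.

    rgb_hr : (H, W, 3)  high-resolution RGB image
    hsi_lr : (h, w, B)  low-resolution hyperspectral image, with H = h*scale, W = w*scale
    Returns the correlated high-resolution estimate and the uncorrelated
    low-resolution residual.
    """
    h, w, B = hsi_lr.shape
    # Downsample RGB to the hyperspectral grid by block averaging
    # (an assumption; the paper's co-registration may differ).
    rgb_lr = rgb_hr.reshape(h, scale, w, scale, 3).mean(axis=(1, 3))

    # Least-squares projection: P (3 x B) minimizing ||rgb_lr @ P - hsi_lr||^2
    X = rgb_lr.reshape(-1, 3)
    Y = hsi_lr.reshape(-1, B)
    P, *_ = np.linalg.lstsq(X, Y, rcond=None)

    # 1) High-resolution image spatially correlated with the RGB input
    H, W, _ = rgb_hr.shape
    hsi_hr_corr = (rgb_hr.reshape(-1, 3) @ P).reshape(H, W, B)

    # 2) Low-resolution residual: spectral content uncorrelated with RGB
    hsi_lr_resid = (Y - X @ P).reshape(h, w, B)
    return hsi_hr_corr, hsi_lr_resid
```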
