Abstract

We propose an unmixing framework for enhancing endmember fraction maps using a combination of spectral and visible images. The new method, data fusion through spatial information-aided learning (DFuSIAL), is based on a learning process for the fusion of a multispectral image of low spatial resolution and a visible RGB image of high spatial resolution. Unlike commonly used methods, DFuSIAL allows for fusing data from different sensors. To achieve this objective, we apply a learning process using automatically extracted invariant points, which are assumed to have the same land cover type in both images. First, we estimate the fraction maps of a set of endmembers for the spectral image. Then, we train a spatial-features aided neural network (SFFAN) to learn the relationship between the fractions, the visible bands, and rotation-invariant spatial features for learning (RISFLs) that we extract from the RGB image. Our experiments show that the proposed DFuSIAL method obtains fraction maps with significantly enhanced spatial resolution and an average mean absolute error between 2% and 4% with respect to the reference ground truth. Furthermore, the proposed method is shown to be preferable to the other state-of-the-art methods examined, especially when the data are obtained from different instruments and in cases with missing-data pixels.
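The learning step described above can be sketched in simplified form: pairs of invariant points supply training samples whose inputs are the high-resolution RGB values plus spatial features, and whose targets are the fractions from the low-resolution maps. The sketch below substitutes an ordinary least-squares regressor for the SFFAN and uses synthetic data; the feature count, endmember count, and the final clip-and-renormalize step are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 3 endmembers, RGB (3 bands) plus 4
# rotation-invariant spatial features per pixel (sizes are assumptions).
n_train, n_feats, n_ems = 500, 3 + 4, 3

# Synthetic "invariant point" training set: inputs from the HSR RGB
# image, targets from the LSR fraction maps at the matched locations.
X = rng.uniform(size=(n_train, n_feats))
W_true = rng.uniform(size=(n_feats, n_ems))
F = X @ W_true                               # stand-in "true" fractions

# Fit the regressor (ordinary least squares in place of the SFFAN).
Xb = np.hstack([X, np.ones((n_train, 1))])   # append a bias column
W, *_ = np.linalg.lstsq(Xb, F, rcond=None)

# Predict fractions for new HSR pixels, then clip and renormalize so
# each pixel's fractions are non-negative and sum to one.
X_new = rng.uniform(size=(10, n_feats))
pred = np.hstack([X_new, np.ones((10, 1))]) @ W
pred = np.clip(pred, 0.0, None)
pred /= pred.sum(axis=1, keepdims=True)
```

In practice a neural network is used precisely because the fraction-to-feature relationship is not linear; the linear fit here only illustrates the data flow from matched invariant points to high-resolution fraction predictions.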

Highlights

  • Imaging spectrometers collect a large number of samples of the reflected light at different wavelengths along the electromagnetic spectrum [1]

  • The results show that both PSQ-based pan sharpening (PS) and data fusion through spatial information-aided learning (DFuSIAL) provide reliable results, with an average mean absolute error (MAE) between 2% and 4%

  • Image fusion methods usually require datasets from the same sensor that overlap both geometrically and temporally. To address these two limitations, we developed a new methodology for enhancing the spatial resolution (SR) of the fraction maps through data fusion of high-SR (HSR) visible images and low-SR (LSR) spectral images



Introduction

Imaging spectrometers collect a large number of samples of the reflected light at different wavelengths along the electromagnetic spectrum [1]. Each pixel of the spectral image holds a spectral signature that describes the chemical and physical characteristics of the surface [2]. This wealth of valuable information can be used in critical image-based geoscience applications [3]. Traditional unmixing methods rely only on the spectral data of the image, whereas spatially adaptive methods incorporate the spatial information of the image to enhance the accuracy of the estimated fractions. In both cases, there is no information regarding the spatial distribution of the endmembers (EMs) within the pixel area, and so the spatial resolution (SR) of the extracted fraction maps is limited to that of the spectral image.
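Unmixing as described above typically assumes the linear mixing model: each pixel spectrum is a convex combination of the EM signatures, and the weights are the sought fractions. A minimal sketch follows, with a made-up endmember matrix and fraction vector; an unconstrained least-squares solve followed by clipping and renormalization stands in for a fully constrained unmixing solver.

```python
import numpy as np

# Linear mixing model: pixel = E @ a, with fractions a >= 0, sum(a) = 1.
# E holds one spectral signature per column (5 bands x 3 endmembers);
# both the signatures and the "true" fractions are illustrative values.
E = np.array([[0.20, 0.70, 0.10],
              [0.35, 0.60, 0.15],
              [0.50, 0.45, 0.30],
              [0.65, 0.30, 0.55],
              [0.70, 0.20, 0.80]])
a_true = np.array([0.5, 0.3, 0.2])
pixel = E @ a_true                 # synthetic mixed-pixel spectrum

# Unconstrained least-squares estimate of the fractions, then a crude
# projection: clip negatives and renormalize so the fractions sum to one.
a_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
a_hat = np.clip(a_hat, 0.0, None)
a_hat /= a_hat.sum()
```

Note that the recovered fractions say nothing about where each material sits inside the pixel footprint, which is exactly the SR limitation the fusion approach targets.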

