Abstract

A new methodology is proposed for enhancing endmember (EM) fraction maps. The new method, termed DFNeFE (data fusion through neural network for fraction estimation), fuses a multispectral image with low spatial resolution (LSR) and a visible RGB image with high spatial resolution (HSR) through a back-propagation neural network (BPNN). First, the fraction maps of a set of EMs are estimated for the spectral image using an accurate unmixing method. Then, spatial statistical features (SSFs) are extracted from both images, and a BPNN is trained to learn the relationship between the fractions, the visible bands of the HSR image, and the SSFs, based on invariant points (IPs), which are assumed to have the same land-cover type in both the multispectral and visible images. Because IPs are extracted automatically, the method can also be applied to images that are not co-registered. The proposed method is evaluated on a real data set comprising two spectral images acquired by the Landsat-8 and Sentinel-2 satellites and an RGB image available in Google Earth. Experimental comparison with sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL) shows that the proposed DFNeFE method obtains fraction maps with significantly enhanced spatial resolution (SR) and an average mean absolute error (MAE) of ~4%.
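The core regression step described above — training a back-propagation network on invariant points to map HSR visible bands plus spatial statistical features to EM fractions — can be sketched as follows. This is a minimal illustration only: the feature layout (3 RGB values plus 4 SSFs per IP), the network size, the synthetic targets, and all hyperparameters are assumptions for the sketch, not the paper's actual DFNeFE configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "invariant point" training set: each row holds an HSR pixel's
# R, G, B values plus 4 spatial statistical features (e.g. local mean/std);
# the target is that pixel's fraction for one endmember, taken from the
# LSR unmixing result. Both are synthetic here.
X = rng.random((200, 7))                     # 200 IPs x (3 RGB + 4 SSF)
y = X[:, :3].mean(axis=1, keepdims=True)     # placeholder fraction in [0, 1]

# One-hidden-layer back-propagation network: sigmoid hidden, linear output.
W1 = rng.normal(0.0, 0.5, (7, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.1
losses = []
for _ in range(500):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # Backward pass: gradients of the mean-squared error
    g_pred = 2.0 * err / len(X)
    g_W2 = h.T @ g_pred;  g_b2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * h * (1.0 - h)
    g_W1 = X.T @ g_h;     g_b1 = g_h.sum(axis=0)
    # Gradient-descent update
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Once trained on the IPs, such a network would be applied to every HSR pixel's RGB + SSF vector to produce the enhanced fraction map.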
