Abstract

We propose a new methodology for enhancing the spatial resolution of unsupervised classification through a fusion of multispectral and visible images. The new method, DFuSIAL-C (Data Fusion through Spatial Information-Aided Learning for Classification), relies on automatically extracted invariant points (IPs), which are assumed to have the same land cover type in the two data sources. In contrast to typical methods, DFuSIAL-C does not require full spatial, spectral, and temporal overlap between the data sources and allows for the fusion of data from different sensors. An evaluation of the proposed method, compared to a state-of-the-art pansharpening fusion method, is carried out using Landsat-8 and Sentinel-2 images. Our experimental results show that DFuSIAL-C obtains unsupervised classification maps with significantly enhanced spatial resolution and an overall accuracy (OA) of 85%. Furthermore, we show that the proposed method is preferable when full overlap is not available because the data are acquired by different instruments.
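To illustrate the general idea of fusing two sources through invariant points, the following is a minimal, hypothetical sketch and not the authors' DFuSIAL-C algorithm: the low-resolution multispectral image is clustered, spectrally pure pixels are taken as stand-in invariant points (an assumption made here for illustration), and a learner maps high-resolution visible-band features at those locations to the cluster labels, which is then applied to every high-resolution pixel. All data, parameter choices, and selection rules below are synthetic placeholders.

```python
# Hedged sketch of an IP-based fusion workflow (not the published DFuSIAL-C method).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# --- Synthetic stand-ins for the two data sources -------------------------
ms_lr = rng.random((60, 60, 6))      # low-resolution multispectral image, 6 bands
vis_hr = rng.random((180, 180, 3))   # high-resolution visible image, 3 bands
scale = vis_hr.shape[0] // ms_lr.shape[0]   # resolution ratio (assumed integer)

# --- 1. Unsupervised classification of the multispectral image ------------
n_classes = 5
km = KMeans(n_clusters=n_classes, n_init=10, random_state=0)
lr_labels = km.fit_predict(ms_lr.reshape(-1, 6)).reshape(ms_lr.shape[:2])

# --- 2. Select invariant points (IPs) --------------------------------------
# Assumption: pixels closest to their cluster centroid are spectrally "pure"
# and keep the same land-cover label in both sources.
dists = np.linalg.norm(
    ms_lr.reshape(-1, 6) - km.cluster_centers_[lr_labels.ravel()], axis=1
)
ip_mask = dists < np.percentile(dists, 20)   # keep the purest 20% as IPs
ip_rows, ip_cols = np.divmod(np.flatnonzero(ip_mask), ms_lr.shape[1])

# --- 3. Learn a mapping from high-res visible features to class labels -----
# For each IP, take the visible pixel at the centre of the co-located
# high-resolution block as the training feature.
feat = vis_hr[ip_rows * scale + scale // 2, ip_cols * scale + scale // 2]
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(feat, lr_labels[ip_rows, ip_cols])

# --- 4. Classify every high-resolution pixel -------------------------------
hr_map = clf.predict(vis_hr.reshape(-1, 3)).reshape(vis_hr.shape[:2])
print("High-resolution classification map:", hr_map.shape)
```

In practice, real co-registered (or at least geolocated) imagery and a more robust IP-selection rule would replace the synthetic data and the percentile threshold used here.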
