Abstract

With the emergence of new remote sensing modalities, it is increasingly important to develop novel algorithms for fusing and integrating different types of data to improve the performance of applications such as target/anomaly detection or classification. Many popular techniques address this problem by performing multiple classifications and fusing the individual results into a single product. In this paper we present a new approach focused on creating joint representations of the multi-modal data, which can then be analyzed by state-of-the-art classifiers. Specifically, we consider the problem of spatial-spectral fusion for hyperspectral imagery. Our approach employs machine learning techniques based on the analysis of joint data-dependent graphs, the resulting data-dependent fusion operators, and their representations.
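The abstract does not specify the construction, but one common way to realize a joint data-dependent graph for spatial-spectral fusion is to combine a spectral similarity kernel with a spatial proximity kernel over the pixels of a hyperspectral cube, row-normalize the resulting affinity matrix into a diffusion-style fusion operator, and use its leading eigenvectors as a joint representation. The sketch below illustrates this idea only; the function name, kernel choices, and parameters (`sigma_spec`, `sigma_spat`, `n_components`) are assumptions, not the authors' method.

```python
import numpy as np

def joint_fusion_embedding(cube, sigma_spec=1.0, sigma_spat=2.0, n_components=5):
    """Sketch: fuse spatial and spectral information from a hyperspectral
    cube (rows x cols x bands) via a joint data-dependent graph.

    Affinity between pixels combines spectral similarity and spatial
    proximity; the row-normalized affinity matrix acts as a data-dependent
    fusion (diffusion) operator, and its leading nontrivial eigenvectors
    give a joint representation for downstream classifiers.
    """
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)  # one spectrum per pixel
    yy, xx = np.mgrid[0:r, 0:c]
    P = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)  # pixel coords

    # Dense pairwise squared distances (fine for tiny cubes; use kNN in practice)
    d_spec = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d_spat = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)

    # Joint affinity: product of spectral and spatial Gaussian kernels
    W = np.exp(-d_spec / sigma_spec**2) * np.exp(-d_spat / sigma_spat**2)

    # Row-normalize into a Markov (diffusion) fusion operator
    A = W / W.sum(axis=1, keepdims=True)

    # Leading nontrivial eigenvectors serve as fused per-pixel features
    vals, vecs = np.linalg.eig(A)
    order = np.argsort(-vals.real)
    return vecs.real[:, order[1:n_components + 1]]

# Usage: a tiny synthetic 4x4 cube with 3 spectral bands
cube = np.random.default_rng(0).random((4, 4, 3))
feats = joint_fusion_embedding(cube, n_components=3)
print(feats.shape)  # one fused feature vector per pixel: (16, 3)
```

The product of kernels (rather than a sum) ensures that two pixels are strongly connected only when they are close both spectrally and spatially, which is one natural way to encode the joint spatial-spectral structure.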
