Abstract

Recent classification tasks in Earth Observation (EO) commonly combine Hyperspectral Image (HSI) and Light Detection and Ranging (LiDAR) signals. However, many current methods fail to consider HSI-LiDAR information concurrently, especially in terms of both its intra- and inter-modality aspects. Additionally, current methods are generally limited in their ability to fuse the features extracted from different modalities. Hence, this paper proposes a center-bridged framework, called Interaction Fusion (IF), that can leverage diverse information concerning the intra- and inter-modality relationships at the same time. More specifically, intra- and inter-modality information can be enriched by introducing the center patch of HSI (cp-HSI) as an extra input. This introduces additional contextual information within and across modalities that can be leveraged for deeper insights. Further, we propose a fusion matrix as a structural feature map designed to integrate nine views generated by a view generator, enabling the adaptive combination of intra- and inter-modality information. Overall, our approach allows potential patterns to be captured, while mitigating any bias resulting from incomplete information. Extensive experiments conducted on three widely recognized datasets – Trento, MUUFL, and Houston – demonstrate that the IF framework achieves state-of-the-art results, surpassing existing methods.
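The abstract describes three inputs (the HSI patch, the LiDAR patch, and the HSI center patch cp-HSI) whose pairwise combinations yield nine views that a fusion matrix then combines adaptively. The sketch below illustrates one plausible reading of that pipeline; the outer-product view construction, the softmax weighting, and all function names are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def generate_views(hsi, lidar, cp_hsi):
    # Hypothetical view generator: the 3 x 3 pairwise interactions among
    # the three inputs give nine views (intra-modality on the diagonal,
    # inter-modality off the diagonal). The outer product is an assumed
    # interaction operator, not taken from the paper.
    inputs = [hsi, lidar, cp_hsi]
    return [np.outer(a, b) for a in inputs for b in inputs]  # 9 views

def fuse_views(views, weights):
    # Hypothetical adaptive fusion: softmax-normalized weights combine
    # the nine views into a single structural feature map.
    w = np.exp(weights - np.max(weights))
    w = w / w.sum()
    return sum(wi * v for wi, v in zip(w, views))

# Toy example with 4-dimensional feature vectors per modality.
rng = np.random.default_rng(0)
hsi_feat = rng.standard_normal(4)
lidar_feat = rng.standard_normal(4)
cp_hsi_feat = rng.standard_normal(4)

views = generate_views(hsi_feat, lidar_feat, cp_hsi_feat)
fused = fuse_views(views, rng.standard_normal(9))
print(len(views), fused.shape)
```

In a real model the interaction operator and fusion weights would be learned (e.g., via attention), but the structure above captures the "three inputs, nine views, one fused map" idea the abstract outlines.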
