Abstract
Recently, the remote sensing community has seen a surge in the use of multimodal data for tasks such as land cover classification and change detection. However, handling multimodal data requires synergistic use of the information from different sources. Deep learning (DL) techniques are currently the dominant approach to multimodal data fusion owing to their superior feature extraction capabilities, but they pose challenges of their own. First, DL models are mostly built as purely feed-forward architectures, which limits their feature extraction capability. Second, multimodal learning is generally addressed in a supervised setting, which demands large amounts of labelled data. Third, such models typically process each modality separately, preventing any cross-modal interaction. Hence, we propose a novel self-supervision-oriented method for multimodal remote sensing data fusion. For effective cross-modal learning, our model solves a self-supervised auxiliary task of reconstructing the input features of one modality from the extracted features of another, yielding more representative pre-fusion features. To move beyond a purely feed-forward architecture, our model comprises convolutions in both the forward and backward directions, creating self-looping connections and hence a self-correcting framework. To facilitate cross-modal communication, we couple the modality-specific extractors through shared parameters. We evaluate our approach on three remote sensing datasets: Houston 2013 and Houston 2018, which are HSI-LiDAR datasets, and TU Berlin, which is an HSI-SAR dataset. We achieve accuracies of 93.08%, 84.59% and 73.21% respectively, surpassing the state of the art by at least 3.02%, 2.23% and 2.84%.
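The following is a minimal sketch, not the authors' implementation, of two ideas the abstract describes: a self-supervised auxiliary loss that reconstructs one modality's input features from the features extracted from the other modality, and coupling of the two modality-specific extractors through a shared convolution. It assumes PyTorch; all module and variable names (`CoupledExtractors`, `dec_a_from_b`, etc.) are hypothetical, and the layer sizes are illustrative only.

```python
# Hypothetical sketch of cross-modal reconstruction with coupled extractors.
# Not the paper's architecture; it omits the forward/backward self-looping
# convolutions and the downstream fusion/classification head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoupledExtractors(nn.Module):
    """Two modality-specific encoders coupled by one shared conv layer."""
    def __init__(self, in_a: int, in_b: int, hidden: int = 64):
        super().__init__()
        self.enc_a = nn.Conv2d(in_a, hidden, kernel_size=3, padding=1)
        self.enc_b = nn.Conv2d(in_b, hidden, kernel_size=3, padding=1)
        # Shared parameters: the same conv is applied to both streams,
        # enabling cross-modal communication during training.
        self.shared = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1)
        # Cross-modal decoders for the self-supervised auxiliary task:
        # reconstruct modality A's input from modality B's features,
        # and vice versa.
        self.dec_a_from_b = nn.Conv2d(hidden, in_a, kernel_size=3, padding=1)
        self.dec_b_from_a = nn.Conv2d(hidden, in_b, kernel_size=3, padding=1)

    def forward(self, x_a, x_b):
        f_a = torch.relu(self.shared(torch.relu(self.enc_a(x_a))))
        f_b = torch.relu(self.shared(torch.relu(self.enc_b(x_b))))
        rec_a = self.dec_a_from_b(f_b)  # reconstruct A from B's features
        rec_b = self.dec_b_from_a(f_a)  # reconstruct B from A's features
        return f_a, f_b, rec_a, rec_b

if __name__ == "__main__":
    # Toy shapes: e.g. a 144-band HSI patch and a 1-band LiDAR patch.
    hsi = torch.randn(2, 144, 16, 16)
    lidar = torch.randn(2, 1, 16, 16)
    model = CoupledExtractors(in_a=144, in_b=1)
    f_a, f_b, rec_a, rec_b = model(hsi, lidar)
    # Self-supervised auxiliary loss: no labels required.
    aux_loss = F.mse_loss(rec_a, hsi) + F.mse_loss(rec_b, lidar)
    print(aux_loss.item())
```

In such a setup, the auxiliary reconstruction loss would typically be added to the supervised fusion objective during training, so the pre-fusion features of each modality are encouraged to carry information about the other.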