Abstract

The extraction of urban structures such as buildings from very high-resolution (VHR) remote sensing imagery has improved dramatically thanks to recent developments in deep multimodal fusion models. However, due to the varied colour intensities and complex textures of building objects in VHR images, and the low quality of digital surface models (DSMs), it is challenging to design an optimal cross-modal fusion network that exploits both modalities. This research presents an end-to-end cross-modal gated fusion network (CMGFNet) for extracting building footprints from VHR remote sensing images and DSM data. CMGFNet extracts multi-level features from the RGB and DSM data using two separate encoders. We propose two feature-fusion schemes: cross-modal fusion and multi-level fusion. For cross-modal fusion, a gated fusion module (GFM) is introduced to combine the two modalities efficiently. Multi-level fusion merges high-level features from deep layers with low-level features from shallower layers through a top-down strategy. Furthermore, a residual-like depth-wise separable convolution (R-DSC) enhances the up-sampling process while reducing the parameter count and time complexity of the decoder. Experimental results on challenging datasets show that CMGFNet outperforms other state-of-the-art models, and an extensive ablation study confirms the efficacy of all key components.
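The two components named in the abstract, the gated fusion module and the R-DSC decoder block, can be sketched compactly. Below is a minimal, illustrative PyTorch sketch under stated assumptions: the gate is taken to be a 1x1 convolution over the concatenated modalities followed by a sigmoid, and the R-DSC block is taken to be bilinear up-sampling followed by a depth-wise separable convolution with a projected residual path. The class names and these design details are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedFusionModule(nn.Module):
    """Cross-modal gated fusion of RGB and DSM feature maps (illustrative sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution producing a per-pixel, per-channel gate from both modalities.
        self.gate_conv = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, dsm_feat: torch.Tensor) -> torch.Tensor:
        # Gate in [0, 1] decides how much DSM evidence to trust at each location.
        gate = torch.sigmoid(self.gate_conv(torch.cat([rgb_feat, dsm_feat], dim=1)))
        # Convex combination: noisy DSM responses are suppressed where the gate is low.
        return gate * dsm_feat + (1.0 - gate) * rgb_feat


class RDSC(nn.Module):
    """Residual-like depth-wise separable convolution for decoder up-sampling (sketch)."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Depth-wise then point-wise convolution: far fewer parameters than a full conv.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        # 1x1 projection so the residual branch matches the output channel count.
        self.skip = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Up-sample first, then refine with the separable convolution plus a residual path.
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return F.relu(self.pointwise(self.depthwise(x)) + self.skip(x))


if __name__ == "__main__":
    rgb = torch.randn(1, 64, 32, 32)   # RGB encoder features at one level
    dsm = torch.randn(1, 64, 32, 32)   # DSM encoder features at the same level
    fused = GatedFusionModule(64)(rgb, dsm)
    out = RDSC(64, 32)(fused)
    print(fused.shape, out.shape)      # (1, 64, 32, 32) and (1, 32, 64, 64)
```

In a top-down multi-level fusion, a block like `RDSC` would be applied at each decoder stage, with the up-sampled output added to or concatenated with the gated features from the corresponding encoder level.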
