Abstract

Exploring and exploiting discriminative multi-level information is crucial for Convolutional Neural Network (CNN)-based remote sensing image semantic segmentation. However, most existing methods either simply concatenate multi-level features or add them element-wise, which may lead to inadequate utilization of cross-level information. To address this problem, we propose a Multi-level Feature Enhancement and Interaction Network (MFEINet) for remote sensing image semantic segmentation. Concretely, a novel Attention-induced Feature Enhancement Module (AFEM) is presented to extract more discriminative multi-level features from both local and global views. Meanwhile, a tailor-made Multi-level Interaction Module (MIM) is proposed to effectively integrate the complementary information among the multi-level features. Comprehensive experiments and analyses on the WHDLD dataset demonstrate the superiority of the proposed modules and network.

Keywords: Remote sensing image; Semantic segmentation; Multi-level feature enhancement; Multi-level feature interaction
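The abstract contrasts naive multi-level fusion (plain concatenation or element-wise addition) with attention-based enhancement before cross-level interaction. Since the internals of AFEM and MIM are not given in the abstract, the following PyTorch snippet is a minimal illustrative sketch, not the authors' implementation: NaiveFusion shows the criticized baseline, while AttentionFusion applies a hypothetical channel-attention gate (a stand-in for the "global view" of attention-induced enhancement) to each level before fusing them.

```python
# Illustrative sketch only: generic attention-weighted multi-level fusion,
# NOT the paper's AFEM/MIM modules (their internals are not specified here).
import torch
import torch.nn as nn
import torch.nn.functional as F


class NaiveFusion(nn.Module):
    """Baseline criticized in the abstract: upsample and concatenate levels."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, low, high):
        # Bring the coarse (high-level) map to the fine (low-level) resolution.
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        return self.proj(torch.cat([low, high], dim=1))


class AttentionFusion(nn.Module):
    """Hypothetical alternative: each level is re-weighted by a channel
    attention vector (global context) before cross-level interaction."""
    def __init__(self, channels):
        super().__init__()
        self.gate_low = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # global pooling
            nn.Conv2d(channels, channels, 1), nn.Sigmoid()  # per-channel weights
        )
        self.gate_high = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1), nn.Sigmoid()
        )
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, low, high):
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        low = low * self.gate_low(low)      # enhance low-level features
        high = high * self.gate_high(high)  # enhance high-level features
        return self.proj(torch.cat([low, high], dim=1))


if __name__ == "__main__":
    low = torch.randn(1, 64, 64, 64)    # fine-resolution feature map
    high = torch.randn(1, 64, 32, 32)   # coarse-resolution feature map
    print(NaiveFusion(64)(low, high).shape)      # torch.Size([1, 64, 64, 64])
    print(AttentionFusion(64)(low, high).shape)  # torch.Size([1, 64, 64, 64])
```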
