Abstract

Exploring and exploiting discriminative multi-level information is crucial for Convolutional Neural Network (CNN) based remote sensing image semantic segmentation. However, most existing methods either simply concatenate multi-level features or perform element-wise addition on them, which may result in inadequate utilization of cross-level information. To address this problem, we propose a Multi-level Feature Enhancement and Interaction Network (MFEINet) for remote sensing image semantic segmentation. Concretely, a novel Attention-induced Feature Enhancement Module (AFEM) is presented to extract more discriminative multi-level features from both local and global views. Meanwhile, a tailor-made Multi-level Interaction Module (MIM) is proposed to effectively integrate the complementary information among the multi-level features. Comprehensive experiments and analyses on the WHDLD dataset demonstrate the superiority of our proposed modules and network.

Keywords: Remote sensing image; Semantic segmentation; Multi-level feature enhancement; Multi-level feature interaction
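To make the two ideas above concrete, here is a minimal, hypothetical PyTorch sketch of an attention-style feature enhancement block (global channel view plus local spatial view) and a gated cross-level fusion block. The class names AFEM and MIM follow the abstract, but every internal design choice (squeeze-and-excitation-style channel attention, a 7x7 spatial attention convolution, gated bilinear fusion) is an assumption for illustration only, not the authors' published architecture.

```python
# Illustrative sketch only; module internals are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AFEM(nn.Module):
    """Attention-induced Feature Enhancement Module (illustrative sketch)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Global view: channel attention computed from globally pooled statistics.
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Local view: spatial attention from a small convolution over pooled maps.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel (global) reweighting.
        w = self.channel_fc(F.adaptive_avg_pool2d(x, 1).view(b, c)).view(b, c, 1, 1)
        x = x * w
        # Spatial (local) reweighting from mean- and max-pooled channel maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))


class MIM(nn.Module):
    """Multi-level Interaction Module (illustrative sketch): gated fusion of
    a fine low-level feature map and a coarse high-level feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse map to the fine resolution, then blend with a learned gate.
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear", align_corners=False)
        g = torch.sigmoid(self.gate(torch.cat([low, high], dim=1)))
        return g * low + (1.0 - g) * high


if __name__ == "__main__":
    low = torch.randn(2, 64, 64, 64)   # fine, low-level features
    high = torch.randn(2, 64, 16, 16)  # coarse, high-level features
    fused = MIM(64)(AFEM(64)(low), AFEM(64)(high))
    print(fused.shape)  # torch.Size([2, 64, 64, 64])
```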
