Abstract

Weakly supervised learning using image-level annotations has become a popular choice for reducing the labeling effort of remote sensing object extraction. Existing methods exploit inter-pixel relations within an individual image patch for object localization. When processing large-scale remote sensing images, however, it remains challenging to obtain global semantic context across image patches for feature representation, which results in inaccurate object localization. To remedy these issues, we propose a local-global anchor guidance network (LGAGNet) for weakly supervised landslide extraction. Specifically, a structure-aware object locating (SOL) module is developed to capture the spatial structure of landslide objects and extract local category anchors containing informative feature embeddings. Furthermore, we leverage a global anchor aggregation (GAA) module to mine semantic patterns across image patches based on a memory bank, which are then used as additional context cues to enhance the feature representation through a cross-attention mechanism. Finally, a hybrid loss function is designed to guide the network training, considering category-aware semantic contrasts and local activation consistency. Experimental results on high-resolution aerial and satellite image datasets verify the effectiveness of the proposed approach for landslide extraction.
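
To make the global anchor aggregation idea concrete, the following is a minimal PyTorch-style sketch of cross-attention between local patch features and a memory bank of category anchors. The class name, tensor shapes, bank size, and momentum-update rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): local patch features attend to a
# global memory bank of category anchors via standard cross-attention.
import torch
import torch.nn as nn


class GlobalAnchorCrossAttention(nn.Module):
    """Enhance per-patch features with context stored in a global anchor memory bank."""

    def __init__(self, dim: int, num_classes: int, bank_size: int = 32, momentum: float = 0.9):
        super().__init__()
        self.momentum = momentum
        # Memory bank: one slot of `bank_size` anchor vectors per category (assumed layout).
        self.register_buffer("bank", torch.zeros(num_classes, bank_size, dim))
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    @torch.no_grad()
    def update_bank(self, class_id: int, local_anchors: torch.Tensor):
        """Momentum update of the per-class slots with anchors from the current patch.

        local_anchors: (bank_size, dim) local category anchors (e.g., from the SOL module).
        """
        self.bank[class_id] = (
            self.momentum * self.bank[class_id] + (1.0 - self.momentum) * local_anchors
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        """feats: (B, N, dim) flattened patch features; returns globally enhanced features."""
        b, n, d = feats.shape
        # Flatten all per-class anchors into one key/value set shared across the batch.
        anchors = self.bank.reshape(1, -1, d).expand(b, -1, -1)  # (B, C * bank_size, dim)

        q = self.q_proj(feats)    # queries from local features
        k = self.k_proj(anchors)  # keys from global anchors
        v = self.v_proj(anchors)  # values from global anchors

        attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
        context = attn @ v                     # (B, N, dim) global context per position
        return feats + self.out_proj(context)  # residual fusion of global cues
```

In this sketch, the memory bank persists across image patches, so each patch can query anchors accumulated from the whole dataset; the residual fusion keeps the original local features intact while injecting the global context cues described in the abstract.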
