Abstract
Semantic segmentation of high-resolution optical remote sensing images is an important but challenging task. Many semantic segmentation networks fail to exploit global and local context information efficiently; to address this, this paper proposes a semantic segmentation network based on sparse self-attention (SDANet) that models global context dependencies. Specifically, the feature maps are first divided into four regions along the spatial and channel dimensions, respectively, and the divided regions are rearranged to form new groups. Second, position and channel self-attention operations are performed on the rearranged regions. Third, the feature maps are restored to their original arrangement, and the position and channel self-attention operations are performed again to obtain the output feature maps. Finally, semantic segmentation is completed based on the output feature maps. Extensive experiments on the ISPRS Vaihingen dataset demonstrate that the proposed method outperforms the self-attention-based DANet and CCNet, as well as general semantic segmentation networks such as FCN, DeepLabv3+, and HRNet.
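The abstract does not specify the exact rearrangement rule, so the following is a minimal sketch of the spatial (position) branch only, assuming an interlaced grouping in the spirit of sparse self-attention: a long-range pass attends within groups of pixels sampled at a stride across the image, then a short-range pass attends within each contiguous block. All function names are illustrative, and plain dot-product attention without learned projections stands in for the paper's attention modules; the channel branch would be analogous, treating channels as tokens.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    # tokens: (N, C). Plain scaled dot-product self-attention,
    # with no learned Q/K/V projections (illustrative only).
    attn = softmax(tokens @ tokens.T / np.sqrt(tokens.shape[1]), axis=-1)
    return attn @ tokens

def sparse_attention_2d(feat, g=2):
    # feat: (H, W, C) feature map; g=2 yields the four regions
    # mentioned in the abstract (g*g groups per pass).
    H, W, C = feat.shape
    a, b = H // g, W // g

    # Pass 1 (long-range): group pixels sharing the same offset
    # within each g x g block, i.e. pixels a stride apart, and
    # attend within each of the g*g interleaved groups.
    x = feat.reshape(a, g, b, g, C).transpose(1, 3, 0, 2, 4).reshape(g * g, a * b, C)
    x = np.stack([self_attention(t) for t in x])
    # Restore the original spatial layout.
    x = x.reshape(g, g, a, b, C).transpose(2, 0, 3, 1, 4).reshape(H, W, C)

    # Pass 2 (short-range): attend within each contiguous
    # g x g block of the restored map.
    y = x.reshape(a, g, b, g, C).transpose(0, 2, 1, 3, 4).reshape(a * b, g * g, C)
    y = np.stack([self_attention(t) for t in y])
    return y.reshape(a, b, g, g, C).transpose(0, 2, 1, 3, 4).reshape(H, W, C)
```

Because each pass attends only within small groups, the cost per pass is far below full H*W x H*W attention, while the two passes together still propagate information between any pair of positions.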