Abstract

Deep learning has achieved remarkable success in the semantic segmentation of remote sensing images (RSIs). In semantic segmentation, where classification and localization must be performed simultaneously, it is crucial to consider both global and local spatial relationships in RSIs. This is especially important for recognizing ground objects with slim, elongated appearances. However, existing methods for land-use semantic segmentation lack an effective mechanism to coordinate these two aspects, which limits the recognition of slim targets and the continuity of land-object identification. Here, an attention-based network called PaANet is developed for semantic segmentation. Our proposed framework builds upon the Swin Transformer by incorporating two key modules: the position-aware attention (PaA) module and the pyramid pooling expectation-maximization (PPEM) module. These modules significantly improve recognition accuracy and the continuity of ground-object recognition while preserving structural classification details. Furthermore, we propose a multiresolution data augmentation method that uses scale-related information to guide the encoder, improving the model's performance and generalization ability. In experiments, the mIoU of our approach on the BLU and GID datasets is 2.37% and 3.94% higher, respectively, than that of the baseline model. Our results are also significantly superior to those of other methods in terms of the continuity of ground-object recognition.
