Abstract

Semantic segmentation of high-resolution aerial images is a challenging task owing to the inter-class homogeneity and intra-class heterogeneity of land cover. Recent works have sought to mitigate this issue by exploiting pixel-wise global contextual information through the self-attention mechanism. However, existing attention-based methods usually produce inaccurate object boundaries, because the self-attention model is embedded only in low-resolution high-level features due to its prohibitive computational complexity. Moreover, existing attention-based models ignore class-wise contextual information from intermediate results, which degrades feature separability. To obtain discriminative features as well as accurate segmentation boundaries, we present a novel segmentation framework for high-resolution aerial imagery, named Cascade Class-aware Enhanced Network (CCENet). The proposed CCENet predicts segmentation results at multiple stages, and the result of each stage is used to refine object boundary details at the next. To exploit the class-aware prior information from the previous stage, we propose a lightweight Class-aware Enhanced Module (CaEM) that captures class-aware contextual dependencies. Specifically, CaEM first extracts a set of class representations of the land covers with a Global Class Pooling (GCP) block, and then reconstructs enhanced features using Class Relation Measurement (CRM), which alleviates the inter-class homogeneity and intra-class heterogeneity of ground objects in the feature space. Quantitative and qualitative experimental results on three publicly available datasets demonstrate the superiority of CCENet over other state-of-the-art methods in terms of both labelling accuracy and computational efficiency.
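The abstract only sketches how CaEM works, so the following is a minimal PyTorch sketch of the two steps it names, written under our own assumptions rather than from the authors' code: we assume GCP aggregates pixel features into one representation per class by softmax-weighting the previous stage's coarse segmentation logits, and that CRM attends from each pixel to these class representations to reconstruct an enhanced feature map. The module name `ClassAwareEnhance` and all layer choices are hypothetical illustrations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassAwareEnhance(nn.Module):
    """Hypothetical sketch of a CaEM-style block: GCP followed by CRM."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 projections for the pixel-to-class attention (our assumption;
        # the abstract does not specify these details).
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Linear(channels, channels)

    def forward(self, feats: torch.Tensor, coarse_logits: torch.Tensor) -> torch.Tensor:
        # feats:         (B, C, H, W) backbone features
        # coarse_logits: (B, K, H, W) segmentation logits from the previous stage
        b, c, h, w = feats.shape
        k = coarse_logits.shape[1]

        # GCP: softmax over all pixels gives per-class spatial weights, so each
        # of the K land-cover classes pools one C-dimensional representation.
        weights = F.softmax(coarse_logits.reshape(b, k, -1), dim=-1)              # (B, K, HW)
        class_reps = torch.bmm(weights, feats.reshape(b, c, -1).transpose(1, 2))  # (B, K, C)

        # CRM: measure the relation of every pixel to every class representation,
        # then reconstruct an enhanced, class-aware feature map.
        q = self.query(feats).reshape(b, c, -1).transpose(1, 2)                   # (B, HW, C)
        kmat = self.key(class_reps)                                               # (B, K, C)
        rel = F.softmax(torch.bmm(q, kmat.transpose(1, 2)) / c ** 0.5, dim=-1)    # (B, HW, K)
        enhanced = torch.bmm(rel, class_reps).transpose(1, 2).reshape(b, c, h, w)

        # Residual fusion keeps the original features alongside the class-aware
        # context, pulling same-class pixels together in feature space.
        return feats + enhanced
```

Under these assumptions, a cascade stage would call this block with its own features and the previous stage's logits, then predict a refined segmentation map from the enhanced features.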
