With the development of CNNs and the adoption of transformers, semantic segmentation of high-resolution remote sensing images has improved significantly. However, category imbalance in remote sensing images often biases a model's segmentation ability toward categories with many samples, yielding suboptimal performance on categories with few samples. To balance the network's learning and representation capabilities across classes, in this paper we propose a category-based interactive attention and perception fusion network (CIAPNet), which partitions the feature space by category so that every category is learned and represented fairly. Specifically, the category grouping attention (CGA) module uses self-attention to reconstruct each category's features in a grouped manner, and its interactive foreground–background relationship optimization (IFBRO) module refines the foreground–background relationship and feature representation of each category. Additionally, we introduce a detail-aware fusion (DAF) module, which uses shallow detail features to complement the semantic information of deep features. Finally, a multi-scale representation (MSR) module is deployed for each class in the CGA and DAF modules to strengthen the description of multi-scale information for every category. Our proposed CIAPNet achieves mIoUs of 54.44%, 85.71%, and 87.88% on the LoveDA urban–rural dataset and the International Society for Photogrammetry and Remote Sensing (ISPRS) Vaihingen and Potsdam urban datasets, respectively. Compared with current popular methods, our network not only achieves excellent performance but also demonstrates outstanding class balance.
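To make the category-grouped attention idea concrete, the following is a minimal sketch of per-category grouped self-attention: channels are split into one group per class and self-attention is applied within each group, so each class receives its own slice of the feature space. This is a hypothetical illustration in PyTorch under assumed shapes, not the authors' CGA implementation (the paper's IFBRO, DAF, and MSR components are omitted).

```python
import torch
import torch.nn as nn

class CategoryGroupedSelfAttention(nn.Module):
    """Illustrative sketch: split channels into per-category groups and
    run self-attention independently within each group.
    NOTE: hypothetical reconstruction, not the authors' CGA code."""

    def __init__(self, channels: int, num_classes: int, heads: int = 1):
        super().__init__()
        assert channels % num_classes == 0, "channels must split evenly by class"
        self.num_classes = num_classes
        self.group_dim = channels // num_classes
        # one attention block shared across groups (could also be one per group)
        self.attn = nn.MultiheadAttention(self.group_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(self.group_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> per-class groups of shape (B*K, H*W, C/K)
        b, c, h, w = x.shape
        g = x.view(b, self.num_classes, self.group_dim, h * w)
        g = g.permute(0, 1, 3, 2).reshape(b * self.num_classes, h * w, self.group_dim)
        out, _ = self.attn(g, g, g)  # self-attention within each class group
        out = self.norm(out + g)     # residual connection + layer norm
        # fold the class groups back into the channel dimension
        out = out.reshape(b, self.num_classes, h * w, self.group_dim)
        out = out.permute(0, 1, 3, 2).reshape(b, c, h, w)
        return out

# usage: 6 classes (e.g., ISPRS Vaihingen), 96 channels -> 16 channels per class
x = torch.randn(2, 96, 32, 32)
y = CategoryGroupedSelfAttention(channels=96, num_classes=6)(x)
print(y.shape)  # torch.Size([2, 96, 32, 32])
```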