Abstract
Few-shot Semantic Segmentation (FSS) aims to segment objects of a novel category given only a few labeled samples, which remains a significant challenge. Existing approaches primarily focus on leveraging category information from the support set to identify objects of the new category in the query image. However, these models often struggle when the paired support and query images differ substantially. To address issues stemming from scenario differences and intra-class diversity, this paper proposes an adaptive similarity-guided self-merging network. First, style differences of multi-level features are introduced to reduce the network's sensitivity to scenario variations and to learn adaptive weights for the K-shot setting. Second, a feature-mask bi-aggregation module is designed to learn an enhanced feature and an initial mask for the query image; within this module, dynamic correlations cover all spatial locations, providing global information that is crucial for feature and mask aggregation. Next, a self-merging module is proposed to alleviate prototype bias by merging a self-prototype derived from the initial mask with an adaptively weighted support prototype obtained from the K support images. Finally, the target object is segmented using the enhanced feature and the merged prototype, and the segmentation result is further refined by base-category predictions and an adjustment factor derived from the multi-level style differences. The proposed method achieves 69.1% (1-shot) and 72.3% (5-shot) mIoU on the PASCAL-5i dataset, and 47.4% (1-shot) and 52.1% (5-shot) mIoU on the COCO-20i dataset, demonstrating state-of-the-art segmentation performance compared to mainstream methods.
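To make the self-merging idea more concrete, below is a minimal PyTorch-style sketch of how a self-prototype from the query's initial mask could be fused with an adaptively weighted support prototype. The function names, the softmax weighting of the K support prototypes, and the fixed merge coefficient `alpha` are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def masked_average_pool(feat, mask):
    """Collapse a feature map into a prototype vector using a (soft) foreground mask.
    feat: [B, C, H, W], mask: [B, 1, H, W] -> prototype: [B, C]
    """
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="bilinear", align_corners=False)
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

def merge_prototypes(query_feat, init_mask, support_protos, style_weights):
    """Illustrative self-merging step (assumed form, not the paper's exact method):
    query_feat:     [B, C, H, W]  enhanced query feature
    init_mask:      [B, 1, H, W]  initial query mask from the bi-aggregation stage
    support_protos: [B, K, C]     prototypes from the K support images
    style_weights:  [B, K]        per-shot scores, e.g., from multi-level style differences
    """
    # Self-prototype taken from the query itself via its initial mask.
    self_proto = masked_average_pool(query_feat, init_mask)        # [B, C]
    # Adaptively weighted support prototype over the K shots.
    w = torch.softmax(style_weights, dim=1).unsqueeze(-1)          # [B, K, 1]
    support_proto = (w * support_protos).sum(dim=1)                # [B, C]
    # Merge coefficient is assumed fixed here; it could equally be learned.
    alpha = 0.5
    return alpha * self_proto + (1.0 - alpha) * support_proto
```

The merged prototype would then be compared against the enhanced query feature (e.g., by cosine similarity at every spatial location) to produce the final segmentation, as described in the abstract.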