Abstract

Recent advances in few-shot segmentation (FSS) have demonstrated remarkable capability in predicting segmentation masks for images of unseen classes using only a small number of annotated examples. However, existing methods overlook the influence of contextual information on segmentation and rely primarily on support prototypes, with limited research on query prototypes. Effectively exploiting multi-scale features and query information remains a challenging problem in this domain. To address these challenges, this paper proposes a novel approach: the multi-scale and attention-based self-support prototype few-shot semantic segmentation network (MASNet). First, a multi-scale feature enhancement module extracts features at different scales to enrich global contextual information. Then, a simple and efficient channel attention mechanism guides the query features toward the target class. Finally, a self-support matching module matches the query prototype against the query features. This strategy efficiently captures class-specific features and mitigates the intra-class variance problem in few-shot segmentation. Experimental results on the Pascal-5i, COCO-20i, and Abdominal MRI datasets demonstrate that the proposed method achieves remarkable robustness and improved accuracy.
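The self-support matching idea described above, building a query prototype and matching it back to the query features, can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows the common prototype-based pattern the abstract refers to, using masked average pooling over an initial foreground estimate followed by per-pixel cosine matching. All function names and shapes here are illustrative assumptions.

```python
import numpy as np

def masked_average_prototype(feats, mask):
    """Build a prototype by masked average pooling.

    feats: (C, H, W) feature map; mask: (H, W) soft foreground
    probabilities (e.g. an initial prediction on the query image).
    Returns a (C,) prototype vector.
    """
    w = mask / (mask.sum() + 1e-8)          # normalize weights over pixels
    return (feats * w).reshape(feats.shape[0], -1).sum(axis=1)

def cosine_match(feats, proto):
    """Per-pixel cosine similarity between features and a prototype.

    feats: (C, H, W); proto: (C,). Returns an (H, W) similarity map,
    which would serve as the refined foreground score.
    """
    c, h, w = feats.shape
    f = feats.reshape(c, -1)                # (C, H*W)
    sim = (proto @ f) / (np.linalg.norm(proto) * np.linalg.norm(f, axis=0) + 1e-8)
    return sim.reshape(h, w)

# Toy example: channel 0 fires on the left column, channel 1 on the right.
feats = np.zeros((2, 2, 2))
feats[0, :, 0] = 1.0
feats[1, :, 1] = 1.0
init_mask = np.array([[1.0, 0.0], [1.0, 0.0]])  # left column is foreground

proto = masked_average_prototype(feats, init_mask)
sim = cosine_match(feats, proto)
# Foreground pixels match the query prototype far better than background.
```

In a k-shot pipeline the initial mask would come from support-prototype matching; the query prototype then re-matches the query's own features, which is how a self-support scheme sidesteps intra-class variance between support and query images.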
