Abstract
Few-shot segmentation (FSS) has achieved remarkable success through prototypical learning. However, owing to the scarcity of support data, FSS methods continue to suffer from large intra-class and inter-class gaps. In this paper, we introduce a unified network, termed FGNet++, that addresses both gaps. FGNet++ comprises a Self-Adaptive Module (SAM) that emphasizes query features and generates enhanced prototypes for self-alignment. These prototypes capture the intrinsic information of each query sample, thereby mitigating intra-class appearance gaps. Moreover, we augment SAM with a Feature Transformation Module (FTM) to further reduce intra-class appearance discrepancies. In addition, we introduce an Inter-class Feature Separation Module (IFSM) dedicated to bridging the inter-class gap. Trained with a B-SLIC-based background loss and a cross-category loss, IFSM makes the feature space of the target class easy to distinguish from those of non-target classes. Furthermore, to assess the generality of our approach, we extend our work to 3D point cloud few-shot segmentation and present FGNet-3D. Experimental results demonstrate that our method successfully mitigates both intra-class and inter-class gaps in FSS through SAM and IFSM, respectively, and achieves state-of-the-art performance on multiple datasets compared with previous approaches.