Abstract

Despite the progress made by few-shot segmentation (FSS) in low-data regimes, the generalization capability of most previous works can be fragile when encountering hard query samples that contain seen-class objects. This paper proposes a fresh and powerful scheme to tackle this intractable bias problem, dubbed base and meta (BAM). Concretely, we attach an auxiliary branch (base learner) to the conventional FSS framework (meta learner) to explicitly identify base-class objects, i.e., the regions that do not need to be segmented. The coarse results produced in parallel by these two learners are then adaptively integrated to derive accurate segmentation predictions. Considering the sensitivity of the meta learner, we further introduce adjustment factors that estimate the scene differences between support and query image pairs from both style and appearance perspectives, so as to guide the ensemble of the two learners' predictions. The remarkable performance gains on standard benchmarks (PASCAL-5^i, COCO-20^i, and FSS-1000) demonstrate the effectiveness of our approach, and surprisingly, the versatile scheme sets new state-of-the-art results even with two plain learners. Furthermore, in light of its unique nature, we also discuss several more practical but challenging extensions, including generalized FSS, 3D point cloud FSS, class-agnostic FSS, cross-domain FSS, weak-label FSS, and zero-shot segmentation. Our source code is available at https://github.com/chunbolang/BAM.
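To make the ensemble idea concrete, below is a minimal PyTorch-style sketch of how a base learner's base-class prediction might be used to suppress false positives in a meta learner's output, modulated by an adjustment factor derived from a support-query style difference. The function names, the Gram-matrix style distance, and the specific fusion rule are illustrative assumptions, not the exact formulation used in BAM.

```python
import torch


def style_difference(feat_s: torch.Tensor, feat_q: torch.Tensor) -> torch.Tensor:
    """Rough style distance between support and query features via Gram matrices.

    feat_s, feat_q: (B, C, H, W) feature maps. Returns a (B,) tensor.
    This is one plausible way to quantify the "scene difference" mentioned in
    the abstract; the exact formulation in the paper may differ.
    """
    def gram(f: torch.Tensor) -> torch.Tensor:
        b, c, h, w = f.shape
        f = f.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    return (gram(feat_s) - gram(feat_q)).flatten(1).norm(dim=1)


def ensemble(meta_logits: torch.Tensor,
             base_logits: torch.Tensor,
             psi: torch.Tensor) -> torch.Tensor:
    """Adaptively fuse meta- and base-learner outputs (hypothetical fusion rule).

    meta_logits: (B, 2, H, W) novel-class bg/fg logits from the meta learner.
    base_logits: (B, 2, H, W) base-class region logits from the base learner.
    psi:         (B,) adjustment factors (larger => downweight the meta learner
                 more strongly in base-class regions).
    Returns fused bg/fg probabilities of shape (B, 2, H, W).
    """
    meta_prob = meta_logits.softmax(dim=1)
    base_fg = base_logits.softmax(dim=1)[:, 1:2]           # regions NOT to segment
    psi = psi.view(-1, 1, 1, 1)
    fg = (meta_prob[:, 1:2] * (1.0 - psi * base_fg)).clamp(0.0, 1.0)
    bg = 1.0 - fg
    return torch.cat([bg, fg], dim=1)
```

In this sketch, a larger style/appearance gap between the support and query pair yields a larger adjustment factor, so the final prediction relies more heavily on the base learner to mask out base-class regions that the meta learner tends to mistake for novel-class targets.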
