Abstract

Fully supervised semantic segmentation performs well on many computer vision tasks, but it is costly because training a model requires a large number of pixel-level annotated samples. Few-shot segmentation has recently become a popular way to address this problem, as it requires only a handful of annotated samples to generalize to new categories. However, fully exploiting the limited samples remains an open problem. In this article, a mutually supervised few-shot segmentation network is therefore proposed. First, feature maps from intermediate convolution layers are fused to enrich the feature representation. Second, the support and query images are combined into a bipartite graph, and a graph attention network is adopted to preserve spatial information and increase the number of support-image pixels that guide query-image segmentation. Third, the attention map of the query image is used as prior information to enhance support-image segmentation, forming a mutually supervised regime. Finally, the attention maps of the intermediate layers are fused and fed into a graph reasoning layer to infer pixel categories. Experiments on the PASCAL VOC-5i and FSS-1000 datasets demonstrate the effectiveness and superior performance of our method compared with baseline methods.
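The bipartite support–query attention described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's exact layer: we model a single-head dot-product attention in which every query-image pixel attends to all support-image pixels, so support features are aggregated to guide each query pixel. The function names and the dot-product scoring are our own; the paper's graph attention network would additionally learn attention parameters and, per the mutual-supervision idea, reuse the resulting attention map in the reverse (query-to-support) direction.

```python
import numpy as np


def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def bipartite_attention(query_feats, support_feats):
    """One pass of (hypothetical) bipartite attention.

    query_feats:   (Nq, d) flattened query-pixel features
    support_feats: (Ns, d) flattened support-pixel features
    Returns the support-guided query features (Nq, d) and the
    attention map (Nq, Ns), whose rows each sum to 1.
    """
    d = query_feats.shape[-1]
    # Scaled dot-product affinity between every query/support pixel pair.
    scores = query_feats @ support_feats.T / np.sqrt(d)   # (Nq, Ns)
    attn = softmax(scores, axis=-1)                       # rows sum to 1
    # Aggregate support features into each query pixel.
    guided = attn @ support_feats                         # (Nq, d)
    return guided, attn


# Toy usage with random pixel features (shapes only, no real images).
rng = np.random.default_rng(0)
q = rng.normal(size=(6, 4))   # 6 query pixels, 4-dim features
s = rng.normal(size=(9, 4))   # 9 support pixels, 4-dim features
guided, attn = bipartite_attention(q, s)
```

In the mutually supervised regime, `attn` (or its transpose-normalized counterpart) would serve as the prior passed back to the support branch; that feedback path is omitted here for brevity.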
