Abstract
The weakly supervised semantic segmentation task aims to learn a segmentation model from image-level annotations alone. Existing methods generally refine initial seeds to obtain pseudo labels for training a fully supervised model. In recent years, several affinity-based methods have performed well on this task. However, most of them focus only on the localization information from the class activation map, while ignoring rule-based appearance information. In this paper, we find that superpixel guidance is helpful for mining semantic affinities between pixels, because pixels belonging to the same superpixel often share the same class label. Accordingly, we propose a Superpixel Guided Weakly Segmentation framework, which alternately learns two modules to fuse superpixel information with localization information. Segmentation results produced by our framework better respect the image's local and global consistency. Experiments show that the proposed method achieves state-of-the-art performance, with an mIoU of 70.5% on the PASCAL VOC 2012 test set and 34.4% on the MS-COCO 2014 val set.
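The core observation above — that pixels in the same superpixel usually share a class label — can be illustrated with a minimal sketch. This is not the paper's actual module; the function name and the pooling strategy (averaging class-activation scores within each superpixel) are assumptions chosen for illustration, with the superpixel map given as a precomputed label array.

```python
import numpy as np

def refine_cam_with_superpixels(cam, superpixels):
    """Illustrative sketch (not the paper's method): smooth a class
    activation map (CAM) by averaging its scores within each superpixel,
    so every pixel in a superpixel receives the same score. This encodes
    the assumption that pixels in one superpixel share a class label."""
    refined = np.zeros_like(cam, dtype=float)
    for sp_id in np.unique(superpixels):
        mask = superpixels == sp_id          # pixels of this superpixel
        refined[mask] = cam[mask].mean()     # one shared score per superpixel
    return refined

# Toy example: a noisy 4x4 CAM and a hypothetical 2-superpixel partition.
cam = np.array([[0.9, 0.8, 0.1, 0.0],
                [0.7, 0.6, 0.2, 0.1],
                [0.9, 0.5, 0.0, 0.2],
                [0.8, 0.7, 0.1, 0.1]])
superpixels = np.array([[0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1]])
refined = refine_cam_with_superpixels(cam, superpixels)
# After refinement each superpixel is uniform, giving cleaner pseudo labels.
```

In practice the superpixel map would come from an algorithm such as SLIC, and the refined map would then be thresholded into pseudo labels for training the fully supervised model.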