Abstract
In recent years, semantic segmentation based on deep convolutional neural networks (DCNNs) has improved remarkably. However, DCNN-based weakly supervised segmentation approaches still lag behind their fully supervised counterparts. We observe that this performance gap mainly stems from the difficulty of producing high-quality dense object localization cues from image-level labels. To narrow this gap, this paper aims to derive more precise and complete pixel-level annotations from image-level tags, and proposes a new iterative training framework that progressively refines pixel-wise labels while training the segmentation network. We first propose a new attention map generation method to locate more discriminative object regions. To recover less-discriminative regions and rectify wrong object localization cues, the method fuses a saliency map into the attention map to generate the initial pseudo pixel-level annotations. In the iterative training process, we train the segmentation network with these pseudo pixel-level annotations as supervision. To correct inaccurate labels in the segmentation masks produced by the current segmentation network, a superpixel-CRF refinement model is then used to produce more accurate pixel-level annotations, which in turn serve as supervision for retraining the segmentation network. Our framework iterates between refining pixel-level annotations and optimizing the segmentation network. Experimental results demonstrate that our method significantly outperforms previous weakly supervised semantic segmentation methods and achieves state-of-the-art performance: 64.7% mIoU on the PASCAL VOC 2012 test set and 26.3% mIoU on the MS COCO validation set.
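To make the fusion step concrete, the following is a minimal sketch of how a saliency map might be combined with a class attention map to form an initial pseudo pixel-level annotation. The thresholds, the fusion rule, and the function name fuse_attention_and_saliency are illustrative assumptions, not the exact scheme used in the paper.

```python
import numpy as np

def fuse_attention_and_saliency(attention, saliency, fg_thresh=0.5, sal_thresh=0.5,
                                class_id=1, ignore_label=255):
    """attention, saliency: HxW arrays in [0, 1] for one image-level class.

    Returns an HxW pseudo annotation where each pixel is the class id,
    background (0), or an 'ignore' label excluded from the training loss.
    """
    mask = np.full(attention.shape, ignore_label, dtype=np.uint8)  # start as 'ignore'
    salient = saliency >= sal_thresh
    # Pixels that are both salient and strongly attended -> the object class.
    mask[salient & (attention >= fg_thresh)] = class_id
    # Non-salient pixels -> background; this also rectifies attention responses
    # that fall outside any salient object region.
    mask[~salient] = 0
    # Salient but weakly attended pixels stay 'ignore' so uncertain regions
    # do not propagate wrong labels into the segmentation network.
    return mask
```

In the iterative framework described above, masks of this kind would supervise the first round of segmentation training; subsequent rounds would replace them with superpixel-CRF-refined predictions from the current network.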