Abstract

Strong visual perception of objects plays an important role in binary segmentation tasks, such as the segmentation of portraits and pulmonary nodules. When facing the same object against different backgrounds, humans maintain a consistent visual perception. This observation has motivated the semantic data augmentation strategies widely used in segmentation tasks. For example, the ‘Cut-Paste’ strategy creates many images by changing the background and assigns them the same segmentation ground truth to enhance training. However, even with these strategies, the segmentation results for images containing the same object but different backgrounds still differ. Hence, this paper proposes to adopt image-level classification and visual attention consistency under background change to enhance the training of binary segmentation. Combining image-level classification with class activation mapping can activate and visualize the regions related to the classification label. Visual attention consistency requires the activated object attention to remain consistent when the background of the input image changes. To this end, we augment the dataset by changing backgrounds with ‘Cut-Paste’. We then adopt a weight-shared triple-branch network that takes the original image, the background-cut-out image, and the background image as inputs, and propose image-level classification and attention consistency objectives to train the binary segmentation network. Experimental results on two datasets demonstrate that our method achieves new state-of-the-art binary segmentation performance.
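The core idea of attention consistency can be sketched in a few lines: compute a class activation map (CAM) for the original image and for its background-cut-out counterpart, then penalize the difference between the two maps. The abstract does not specify the exact loss form, so the mean-squared-error penalty below (and all function names) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def class_activation_map(features, weights):
    """Zhou et al.-style CAM, simplified: a weighted sum of conv feature
    maps followed by ReLU and max-normalization (no upsampling here)."""
    # features: (C, H, W) feature maps; weights: (C,) classifier weights
    cam = np.tensordot(weights, features, axes=([0], [0]))  # -> (H, W)
    cam = np.maximum(cam, 0.0)                              # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                               # scale to [0, 1]
    return cam

def attention_consistency_loss(cam_a, cam_b):
    """Mean-squared difference between two attention maps.
    MSE is an assumed stand-in for the paper's unspecified loss."""
    return float(np.mean((cam_a - cam_b) ** 2))

# Toy example: features of the original image vs. a slightly perturbed
# background-cut-out view (stand-ins for real backbone outputs).
rng = np.random.default_rng(0)
feats_orig = rng.random((8, 7, 7))
feats_cut = feats_orig + 0.01 * rng.random((8, 7, 7))
w = rng.random(8)

loss = attention_consistency_loss(
    class_activation_map(feats_orig, w),
    class_activation_map(feats_cut, w),
)
```

In training, minimizing this term alongside the image-level classification loss encourages the shared backbone to attend to the object itself rather than to background cues.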
