Abstract

To simplify interactive image segmentation, we propose a new interactive segmentation framework that cuts an object from its background with user interaction reduced to a single click. The framework consists of two steps: the image is first segmented automatically, and the object is then extracted via user interaction. Combining the color and texture features of the image, we partition it automatically on the basis of modularity optimization: we construct an image region similarity network and divide the network into communities. We also propose several region selection strategies. The user only needs to provide one click; the region containing the click is merged with its adjacent regions recursively according to the chosen region selection strategy, yielding the user-desired region, and the image is finally divided into foreground and background. Compared with existing interactive segmentation approaches, the proposed method uses the simplest form of user interaction: it does not require both foreground and background markers as input. We evaluate our framework on several public image datasets. The experimental results indicate that the proposed method is superior to existing interactive segmentation approaches, achieving 67.5% accuracy on Grabcut, 80.8% on BSD_SSDS, and 78.6% on MSRC_HighQuality.
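To make the two-step pipeline concrete, the following is a minimal sketch in Python, assuming hypothetical inputs: region_features (per-region color/texture feature vectors) and adjacency (pairs of neighboring region ids). The modularity optimization here uses a greedy heuristic from networkx, and the merge rule is one illustrative region selection strategy; none of the names, thresholds, or library choices are taken from the paper itself.

import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def build_region_similarity_network(region_features, adjacency):
    """Connect adjacent regions, weighting edges by feature similarity."""
    g = nx.Graph()
    g.add_nodes_from(range(len(region_features)))
    for i, j in adjacency:
        dist = np.linalg.norm(region_features[i] - region_features[j])
        g.add_edge(i, j, weight=np.exp(-dist))  # higher weight = more similar
    return g

def automatic_partition(graph):
    """Step 1: partition the similarity network into communities
    by modularity optimization (greedy heuristic here)."""
    return [set(c) for c in greedy_modularity_communities(graph, weight="weight")]

def extract_object(graph, communities, clicked_region, sim_threshold=0.5):
    """Step 2: grow the foreground from the single user click by
    recursively merging adjacent communities whose connecting edges
    are similar enough (one possible region selection strategy)."""
    label = {r: k for k, comm in enumerate(communities) for r in comm}
    foreground = {label[clicked_region]}
    changed = True
    while changed:
        changed = False
        for k, comm in enumerate(communities):
            if k in foreground:
                continue
            # average similarity between this community and the current foreground
            weights = [graph[u][v]["weight"]
                       for u in comm for v in graph[u]
                       if label[v] in foreground]
            if weights and np.mean(weights) >= sim_threshold:
                foreground.add(k)
                changed = True
    # union of all region ids assigned to the foreground communities
    return set().union(*(communities[k] for k in foreground))

All regions outside the returned set are treated as background, which mirrors the final foreground/background split described above.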
