Abstract

We propose an interactive object segmentation method that learns feature-specific segmentation parameters from a single image. The first step is to design discriminative features for each pixel, integrating four kinds of cues, i.e., the color Gaussian mixture model (GMM), the graph learning-based attribute, the texture GMM, and the geodesic distance. We then formulate the segmentation problem as a conditional random field (CRF) model that fuses the multiple features. Although an image-specific parameter setting is practical in interactive segmentation, the efficiency of the learning process depends heavily on the type of user interaction and the designed features. We propose a feature-specific parameter learning strategy in which no offline training stage is required and the model parameters are computed from sparsely labeled pixels within the single image. Extensive experiments show that the proposed segmentation model performs well on images with weak boundaries, texture, or cluttered backgrounds. Comparative results demonstrate that our method achieves both qualitative and quantitative improvements over other state-of-the-art interactive segmentation methods.
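To make the feature-design step concrete, the following is a minimal sketch, not the authors' implementation, of how one of the four cues (the color GMM) could be estimated from user scribbles and turned into a per-pixel score that a CRF unary term might consume. The function name `color_gmm_cue`, the scribble masks `fg_mask`/`bg_mask`, the component count `n_components`, and the use of scikit-learn's `GaussianMixture` are all illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch of the color-GMM cue (assumptions labeled in the lead-in).
import numpy as np
from sklearn.mixture import GaussianMixture  # assumed dependency


def color_gmm_cue(image, fg_mask, bg_mask, n_components=5):
    """Per-pixel foreground/background log-likelihood ratio from scribbles.

    image:   H x W x 3 float array of pixel colors.
    fg_mask: H x W boolean mask of user-scribbled foreground pixels.
    bg_mask: H x W boolean mask of user-scribbled background pixels.
    """
    pixels = image.reshape(-1, 3)

    # Fit separate color GMMs to the sparsely labeled foreground and background pixels.
    fg_gmm = GaussianMixture(n_components=n_components, covariance_type="full").fit(image[fg_mask])
    bg_gmm = GaussianMixture(n_components=n_components, covariance_type="full").fit(image[bg_mask])

    # Log-likelihood ratio per pixel: positive values favor foreground.
    llr = fg_gmm.score_samples(pixels) - bg_gmm.score_samples(pixels)
    return llr.reshape(image.shape[:2])
```

In a full pipeline of the kind the abstract describes, this score would be one of several per-pixel features (alongside the texture GMM, the graph learning-based attribute, and the geodesic distance) fused in the CRF energy with image-specific weights.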
