Abstract

In this paper, we propose a video object segmentation method based on a global-consistency-aware query strategy. The aim is to achieve higher segmentation accuracy with less user annotation. Intuitively, annotating some frames yields better segmentation performance than annotating others; this frame-selection problem can be modeled in an active learning framework. Specifically, we first generate a sample space of candidate annotation regions for each frame via an object-proposal method. Second, the annotation likelihood of each region is computed from the annotation history and the global consistency of the object across the video. Third, the segmentation result for an annotated region is obtained by minimizing an MRF energy function. Fourth, the algorithm presents the user with the most valuable frame to annotate, i.e., the frame with high annotation likelihood and a large expected change in the segmentation result. Finally, the new annotation is fed back into the framework to begin the next iteration. Experiments on a number of video sequences demonstrate that the proposed method reduces user effort and achieves higher segmentation accuracy than state-of-the-art methods.
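The iterative query loop described above can be sketched as follows. This is a minimal toy illustration, not the paper's actual method: the likelihood and segmentation-change scores are stand-in numbers rather than the MRF-derived quantities, and all function names are hypothetical.

```python
def select_frame(likelihood, seg_change, annotated):
    """Return the index of the most valuable unannotated frame.

    likelihood[i] -- annotation likelihood of frame i (toy value)
    seg_change[i] -- estimated change in the segmentation result if
                     frame i were annotated (toy value)
    annotated     -- set of frame indices already annotated
    """
    best, best_score = None, float("-inf")
    for i in range(len(likelihood)):
        if i in annotated:
            continue
        # Combine the two selection criteria from the abstract:
        # high annotation likelihood AND large segmentation change.
        score = likelihood[i] * seg_change[i]
        if score > best_score:
            best, best_score = i, score
    return best


def annotation_loop(likelihood, seg_change, budget):
    """Iteratively query the user for up to `budget` annotations."""
    annotated, order = set(), []
    for _ in range(budget):
        i = select_frame(likelihood, seg_change, annotated)
        if i is None:
            break
        annotated.add(i)   # the user annotates frame i
        order.append(i)
        # In the real method, the segmentation would now be re-solved
        # (MRF energy minimization) and both scores re-estimated.
    return order


# Example: 5 frames, query the 2 most valuable ones.
order = annotation_loop([0.9, 0.2, 0.8, 0.5, 0.1],
                        [0.3, 0.9, 0.7, 0.6, 0.2], budget=2)
print(order)  # prints [2, 3]
```

In the full framework the scores would be recomputed after every user annotation, since each new annotation changes both the segmentation and the global-consistency estimates; the sketch keeps them fixed only for brevity.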
