Abstract

In this paper, a self-propagating video segmentation approach based on patch matching and an enhanced OneCut model is proposed, which takes full advantage of the target's color features, shape features, and motion information. First, an interactive segmentation of a key frame is performed to obtain an accurate initial contour of the target. Second, sampling patches are uniformly selected along the target contour of the previous frame to initialize localized classifiers. Patch matching is then used to propagate this contour to the current frame; the localized classifiers are moved to the current frame in the same way, and their positions and parameters are updated accordingly. Finally, the foreground and background probability maps of the current frame are computed from the localized classifiers together with global probability models, and the enhanced OneCut model is constructed to obtain the segmentation result. Compared with state-of-the-art video segmentation methods, the proposed approach performs outstandingly on the DAVIS dataset.
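The fusion of local and global probability maps described above can be illustrated with a minimal sketch. Here the blend weight `alpha`, the map shapes, and the simple per-pixel comparison standing in for the enhanced OneCut graph-cut optimization are all illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fuse_probability_maps(local_fg, global_fg, alpha=0.5):
    """Blend localized-classifier and global-model foreground
    probabilities into one map (the weight alpha is illustrative)."""
    return alpha * local_fg + (1.0 - alpha) * global_fg

def segment(fg_prob, bg_prob):
    """Label a pixel foreground where its fused foreground probability
    exceeds the background probability; a crude stand-in for the
    paper's enhanced OneCut energy minimization."""
    return (fg_prob > bg_prob).astype(np.uint8)

# Toy 2x2 probability maps for a single frame
local_fg = np.array([[0.9, 0.2], [0.8, 0.1]])
global_fg = np.array([[0.7, 0.4], [0.6, 0.3]])
fg = fuse_probability_maps(local_fg, global_fg)
bg = 1.0 - fg
mask = segment(fg, bg)
```

In the full method the per-pixel decision would instead feed a graph-cut model that also encodes pairwise smoothness terms, which this sketch omits for brevity.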
