Abstract

Video segmentation is widely applied in many fields, such as motion identification, target tracking, video retrieval, and video editing. We propose a video segmentation approach that combines the color features, shape features, and motion information of the target. First, we introduce a small amount of user interaction into the processing of the key frame to obtain an accurate contour, and then initialize the local classifiers. Second, we use patch-based sparse matching (referred to as patch matching in the following) to propagate the contour of the previous frame to the current frame, so that an initial contour of the target is estimated; the position parameters are updated at the same time. Finally, we compute the foreground and background probability distributions of the current frame from the global probability models and the local classifiers, and then construct an enhanced OneCut model to obtain the segmentation result. Compared with state-of-the-art video segmentation methods, our approach achieves outstanding performance on the DAVIS dataset.
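The abstract describes a frame-by-frame pipeline: propagate the previous frame's contour to the current frame, then refine it with a graph-cut model built from global and local probability models. The sketch below illustrates that structure only; it is not the authors' implementation. As labeled in the comments, dense optical flow stands in for the patch-based sparse matching, and OpenCV's GrabCut stands in for the enhanced OneCut model with local classifiers.

```python
# Minimal sketch of the per-frame pipeline outlined in the abstract.
# Hypothetical stand-ins: Farneback optical flow replaces patch-based sparse
# matching, and GrabCut replaces the enhanced OneCut model / local classifiers.
import cv2
import numpy as np


def propagate_mask(prev_frame, cur_frame, prev_mask):
    """Warp the previous frame's foreground mask into the current frame."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_mask.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward warp: sample the previous mask at flow-displaced positions.
    map_x = (grid_x - flow[..., 0]).astype(np.float32)
    map_y = (grid_y - flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_mask, map_x, map_y, cv2.INTER_NEAREST)


def refine_with_graph_cut(frame, init_mask, iterations=3):
    """Refine the propagated mask with a graph-cut segmentation step."""
    gc_mask = np.where(init_mask > 0, cv2.GC_PR_FGD,
                       cv2.GC_PR_BGD).astype(np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, gc_mask, None, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_MASK)
    fg = (gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD)
    return np.where(fg, 255, 0).astype(np.uint8)


def segment_video(frames, key_frame_mask):
    """Propagate the key-frame mask through the sequence, frame by frame."""
    masks = [key_frame_mask]
    for prev_frame, cur_frame in zip(frames, frames[1:]):
        init = propagate_mask(prev_frame, cur_frame, masks[-1])
        masks.append(refine_with_graph_cut(cur_frame, init))
    return masks
```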
