Abstract

The objective of foreground segmentation is to extract the desired foreground object from input videos. Over the years there has been a significant amount of effort on this topic; nevertheless, there is still no simple yet effective algorithm that can process live videos of objects with fuzzy boundaries captured by freely moving cameras. This paper presents an algorithm toward this goal. The key idea is to train and maintain two competing one-class support vector machines (1SVMs) at each pixel location, which model the local color distributions of the foreground and background, respectively. We advocate the use of two competing local classifiers because it provides higher discriminative power and allows better handling of ambiguities. As a result, our algorithm can deal with a variety of videos with complex backgrounds and freely moving cameras with minimal user interaction. In addition, by introducing novel acceleration techniques and by exploiting the parallel structure of the algorithm, real-time processing speed is achieved for VGA-sized videos.
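The core idea of two competing per-pixel one-class classifiers can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy color data, the RBF kernel, and the `gamma`/`nu` parameters are all assumptions chosen for demonstration, and scikit-learn's `OneClassSVM` stands in for whatever 1SVM formulation the paper uses.

```python
# Illustrative sketch (assumed, not the paper's implementation): at one pixel
# location, train two one-class SVMs on local color samples, one for the
# foreground and one for the background, then label a new color by whichever
# model assigns it the higher decision score.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Toy local color samples (RGB in [0, 1]) observed at a single pixel location.
fg_colors = rng.normal(loc=[0.8, 0.2, 0.2], scale=0.05, size=(50, 3))  # reddish object
bg_colors = rng.normal(loc=[0.2, 0.6, 0.3], scale=0.05, size=(50, 3))  # greenish scene

# Each 1SVM models the support of one class's local color distribution.
fg_model = OneClassSVM(kernel="rbf", gamma=10.0, nu=0.1).fit(fg_colors)
bg_model = OneClassSVM(kernel="rbf", gamma=10.0, nu=0.1).fit(bg_colors)

def classify(color):
    """Label a color as foreground iff the FG model scores it higher."""
    c = np.asarray(color, dtype=float).reshape(1, -1)
    fg_score = fg_model.decision_function(c)[0]
    bg_score = bg_model.decision_function(c)[0]
    return "foreground" if fg_score > bg_score else "background"

print(classify([0.82, 0.21, 0.19]))  # a reddish sample
print(classify([0.18, 0.62, 0.31]))  # a greenish sample
```

Comparing the two decision scores, rather than thresholding a single model, is what gives the competing-classifier setup its discriminative power: a color that is ambiguous under one model can still be resolved by how poorly the other model explains it.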
