Abstract

Motion segmentation in dynamic scenes is currently dominated by parametric methods based on deep neural networks. The present study explores an unsupervised segmentation approach that can be applied, in the absence of training data, to segment new videos. In particular, it tackles the task of dynamic texture segmentation: clustering complex phenomena that are both spatially and temporally repetitive into groups, automatically assigning a single class label to each region. We present an effective fusion framework for motion segmentation in dynamic scenes (FFMS). The model merges several segmentation maps, each containing multiple regions of weak quality, into a more accurate final segmentation. The diverse labelling fields required for this combination step are obtained by a simplified grouping scheme applied to the input video along its three orthogonal planes (xy, xt, and yt). Experiments conducted on two challenging datasets (SynthDB and YUP++) show that, unlike current motion segmentation approaches that require either parameter estimation or a training step, FFMS is significantly faster, easier to implement, and has few parameters.
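The abstract does not detail the grouping scheme or the exact fusion rule, so the following is only a minimal sketch of the combination step it describes: several label fields (for instance, one per orthogonal plane) are merged into a single map. Since independently produced clusterings use arbitrary label IDs, the sketch first aligns each map to a reference via Hungarian matching on the label co-occurrence matrix, then takes a per-pixel majority vote. The function names (`align_labels`, `fuse_maps`) and the voting rule are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of fusing per-plane label fields; not the FFMS algorithm
# itself, whose grouping scheme and fusion rule are not given in the abstract.
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_labels(reference, candidate, num_labels):
    """Relabel `candidate` so its classes best overlap those of `reference`,
    using Hungarian matching on the label co-occurrence matrix."""
    overlap = np.zeros((num_labels, num_labels), dtype=np.int64)
    np.add.at(overlap, (reference.ravel(), candidate.ravel()), 1)
    rows, cols = linear_sum_assignment(-overlap)   # maximize total overlap
    mapping = np.empty(num_labels, dtype=np.int64)
    mapping[cols] = rows
    return mapping[candidate]

def fuse_maps(maps, num_labels):
    """Combine several (H, W) integer label maps by per-pixel majority vote,
    after aligning every map to the first one."""
    ref = maps[0]
    aligned = [ref] + [align_labels(ref, m, num_labels) for m in maps[1:]]
    stack = np.stack(aligned)                      # (n_maps, H, W)
    counts = np.zeros((num_labels,) + ref.shape, dtype=np.int64)
    for k in range(num_labels):
        counts[k] = (stack == k).sum(axis=0)       # votes for label k
    return counts.argmax(axis=0)                   # winning label per pixel

if __name__ == "__main__":
    # Toy check: three noisy copies of a 4-class map, each under its own
    # random label permutation (all data here is synthetic and illustrative).
    rng = np.random.default_rng(0)
    truth = rng.integers(0, 4, size=(32, 32))
    maps = []
    for _ in range(3):
        noise = rng.random(truth.shape) < 0.2
        noisy = np.where(noise, rng.integers(0, 4, size=truth.shape), truth)
        maps.append(rng.permutation(4)[noisy])     # scramble label IDs
    fused = fuse_maps(maps, num_labels=4)
    # Align the fused result back to the ground-truth labelling to score it.
    print((align_labels(truth, fused, 4) == truth).mean())
```

In FFMS itself the inputs would be the label fields produced by the simplified grouping scheme on the xy, xt, and yt planes; majority voting is just one plausible stand-in for the paper's fusion model.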
