Abstract

Motion segmentation plays an important role in many applications, including autonomous driving, computer vision, and robotics. Previous works mainly focus on segmenting objects in videos seen during training. In this paper, we present a novel approach based on pixel distribution learning for motion segmentation in unseen videos. In particular, optical flow is extracted from consecutive frames to describe motion information. We then randomly permute these motion features and use them as input to a convolutional neural network. The random permutation forces the network to learn the pixels' distributions rather than local pattern information. Consequently, the proposed approach has a favorable generalization capacity and can be applied to unseen videos. In contrast to previous deep-learning-based approaches, the training videos and testing videos of our approach are completely different. Experiments on videos from the KITTI-MOD dataset demonstrate that the proposed approach achieves promising results and shows potential for better motion segmentation on unseen videos.

Keywords: Machine learning for multimedia; Multimedia vision; Video processing
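The paper itself does not include code, but the core idea described above, extracting optical flow and randomly permuting per-pixel motion features before feeding them to a CNN, can be illustrated with a minimal sketch. The sketch below assumes dense Farnebäck optical flow from OpenCV and a simple per-pixel neighborhood permutation; the patch size, the permutation scheme, and the names extract_flow and permute_pixel_distribution are our own illustrative choices, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' code. Assumes OpenCV and NumPy.
import cv2
import numpy as np

def extract_flow(prev_gray, next_gray):
    """Dense Farneback optical flow between two grayscale frames -> (H, W, 2)."""
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

def permute_pixel_distribution(flow, patch=5, rng=None):
    """For every pixel, collect the flow vectors in a patch x patch
    neighborhood and shuffle their order independently per pixel, so a
    downstream CNN sees a per-pixel *distribution* of motion vectors
    rather than their spatial arrangement. Output: (H, W, patch*patch*2)."""
    rng = rng or np.random.default_rng()
    h, w, c = flow.shape
    pad = patch // 2
    padded = np.pad(flow, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    # Gather the neighborhood of every pixel: (H, W, patch*patch, 2).
    neigh = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(patch) for dx in range(patch)], axis=2)
    # An independent random permutation of the neighborhood samples per pixel,
    # obtained by argsorting random keys along the neighborhood axis.
    order = np.argsort(rng.random((h, w, patch * patch)), axis=2)
    neigh = np.take_along_axis(neigh, order[..., None], axis=2)
    return neigh.reshape(h, w, patch * patch * c)

# Usage (placeholder filenames): two consecutive frames -> permuted features.
prev_f = cv2.cvtColor(cv2.imread("frame_000.png"), cv2.COLOR_BGR2GRAY)
next_f = cv2.cvtColor(cv2.imread("frame_001.png"), cv2.COLOR_BGR2GRAY)
features = permute_pixel_distribution(extract_flow(prev_f, next_f))
```

Because the permutation destroys the spatial layout within each neighborhood while preserving the set of motion vectors, the network cannot overfit to scene-specific local patterns, which is what the abstract credits for the generalization to unseen videos.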
