Abstract

Moving object segmentation (MOS) is an important and well-studied computer vision problem. It is used in applications such as video surveillance, human tracking, self-driving cars, and video compression. Traditional approaches solve this problem by extracting hand-crafted features and then modeling the background with them. Convolutional Neural Networks (CNNs), on the other hand, have proven more powerful than traditional methods at feature extraction. In this work, a hybrid system is presented that combines flux tensors with a 3D CNN, improving performance on unseen videos. A 3D CNN can extract spatial and temporal features, thereby exploiting motion information between adjacent frames. Motion entropy feature maps extracted by the 3D CNN and the output of the flux tensor are jointly fed into an encoder-decoder network. The ChangeDetection 2014 dataset is used for both the training and test stages. Training and test videos are selected separately, and the networks are evaluated on unseen videos. The proposed network gives promising segmentation results that are competitive with existing methods.
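To make the flux tensor component concrete, the following is a minimal sketch (not the authors' implementation) of how a flux tensor motion map can be computed from a stack of grayscale frames. It assumes the standard formulation in which the trace of the flux tensor is the locally averaged squared temporal derivative of the spatiotemporal gradient; the function name `flux_tensor_trace` and the window size are illustrative choices.

```python
# Hypothetical sketch of a flux-tensor motion map (not the paper's code).
# The flux tensor trace accumulates, over a local spatial window, the
# squared temporal derivatives of the 3D gradient: I_xt^2 + I_yt^2 + I_tt^2.
import numpy as np
from scipy.ndimage import sobel, uniform_filter


def flux_tensor_trace(frames: np.ndarray, window: int = 5) -> np.ndarray:
    """frames: (T, H, W) grayscale frame stack.
    Returns a (T-2, H, W) motion map (trace of the flux tensor)."""
    frames = frames.astype(np.float64)

    # Spatial gradients of every frame.
    ix = np.stack([sobel(f, axis=1) for f in frames])  # d/dx
    iy = np.stack([sobel(f, axis=0) for f in frames])  # d/dy

    # Temporal derivatives: central differences along the frame axis.
    ixt = (ix[2:] - ix[:-2]) / 2.0
    iyt = (iy[2:] - iy[:-2]) / 2.0
    itt = frames[2:] - 2.0 * frames[1:-1] + frames[:-2]

    # Trace of the flux tensor, averaged over a local spatial window
    # (size 1 along the time axis leaves that axis untouched).
    trace = ixt ** 2 + iyt ** 2 + itt ** 2
    return uniform_filter(trace, size=(1, window, window))
```

In a pipeline like the one described in the abstract, such a motion map would be thresholded or passed as-is alongside the 3D-CNN feature maps into the encoder-decoder network; the exact fusion strategy is specified in the body of the paper, not here.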
