Abstract

Change/motion detection is a challenging problem in video analysis and surveillance systems. Recently, state-of-the-art methods based on sample-based background models have demonstrated impressive results on this problem. However, they are ineffective in dynamic scenes that contain complex motion patterns. In this paper, we introduce a novel data-driven approach that combines a sample-based background model with a feature extractor obtained by training a triplet network. The network is built from three identical convolutional neural networks, each of which is called a motion feature network. It automatically learns motion patterns from small image patches and transforms input images of any size into feature embeddings for high-level representation. A sample-based background model is then maintained for each pixel using both color information and the extracted feature embeddings. We also propose an approach to generate triplet examples from CDNet 2014 for training the network from scratch. The offline-trained network can be used on the fly, without re-training on any video sequence before each execution; it is therefore feasible for real-time surveillance systems. We show that our method outperforms other state-of-the-art methods on CDNet 2014 and on other benchmarks (BMC and Wallflower).
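The abstract states that the feature extractor is obtained by training a triplet network of three identical CNNs. The standard objective for such networks is the triplet margin loss, which pulls an anchor patch toward a patch with a similar motion pattern and pushes it away from a dissimilar one. Below is a minimal sketch of that objective; the embeddings, margin value, and the idea of using it here are illustrative assumptions, since the abstract does not give the exact loss.

```python
# Sketch of the triplet margin objective commonly used to train triplet
# networks: enforce d(anchor, positive) + margin < d(anchor, negative).
# The toy embedding vectors below are stand-ins for the outputs of the
# shared motion feature network (hypothetical values, not from the paper).
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss: max(0, d(a, p) - d(a, n) + margin)."""
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

# Anchor and positive share a motion pattern, negative does not.
a = [0.0, 1.0]
p = [0.1, 0.9]   # similar patch -> embedding close to the anchor
n = [1.0, 0.0]   # dissimilar patch -> embedding far from the anchor
loss = triplet_loss(a, p, n)
```

Minimizing this loss over many generated triplets (e.g., the CDNet 2014 triplets the paper proposes) drives the shared CNN toward embeddings in which motion-pattern similarity corresponds to Euclidean proximity.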

