Abstract

Motion detection is a basic step in video processing. Previous deep-learning approaches to motion detection require clean foreground or background images, which often do not exist in practice. To address this challenge, a novel and practical method based on auto-encoder neural networks is proposed. First, approximate background images are obtained from video frames via an auto-encoder network (called the Reconstruction Network). Then, a background model is learned from these images using another auto-encoder network (called the Background Network). To make it more resilient, our background model can be updated online to absorb more training samples. Our main contributions are 1) the architecture of the coupled auto-encoder networks, which models the background very efficiently; and 2) an online learning algorithm in which a method for searching for the minimizing-effect parameters is adopted to accelerate the training of the Reconstruction Network. Our approach improves motion detection performance on three data sets.
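The abstract describes a two-stage pipeline of coupled auto-encoders. The following is a minimal sketch of that idea, assuming PyTorch; the layer sizes, frame dimensions, network class, and the thresholded frame-minus-background rule for the final mask are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Simple fully connected auto-encoder over flattened grayscale frames
    (assumed architecture; the paper's actual layers may differ)."""
    def __init__(self, frame_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(frame_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, frame_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 1 -- Reconstruction Network: maps raw frames to approximate backgrounds.
# Stage 2 -- Background Network: learns a background model from those approximations.
frame_dim, hidden_dim = 64 * 64, 256  # assumed frame size and code size
reconstruction_net = AutoEncoder(frame_dim, hidden_dim)
background_net = AutoEncoder(frame_dim, hidden_dim)

def detect_motion(frame, threshold=0.1):
    """Flag pixels that deviate from the modelled background.
    The threshold and the differencing rule are assumptions for illustration."""
    with torch.no_grad():
        approx_bg = reconstruction_net(frame)    # approximate background image
        modelled_bg = background_net(approx_bg)  # output of the background model
    return (frame - modelled_bg).abs() > threshold  # boolean foreground mask
```

In this sketch, online updating would amount to continuing to train `background_net` (and, if needed, `reconstruction_net`) on newly arriving frames; the parameter-search acceleration mentioned in the abstract is not reproduced here.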
