Abstract
Motion detection is a fundamental step in video processing. Previous deep-learning approaches to motion detection require clean foreground or background images, which are rarely available in practice. To address this challenge, we propose a novel and practical method based on auto-encoder neural networks. First, approximate background images are obtained from video frames via an auto-encoder network (the Reconstruction Network). Then, a background model is learned from these images by a second auto-encoder network (the Background Network). To make the model more robust, the background model can be updated online to absorb additional training samples. Our main contributions are 1) the coupled auto-encoder architecture, which models the background very efficiently, and 2) the online learning algorithm, in which a search for the parameters with the least effect is adopted to accelerate the training of the Reconstruction Network. Our approach improves motion detection performance on three data sets.
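The abstract only outlines the two-network pipeline; the sketch below is a minimal, illustrative PyTorch-style rendering of that idea, not the authors' implementation. The Reconstruction Network estimates approximate backgrounds from raw frames, and the Background Network is then fitted to those estimates as the background model. All layer sizes, the optimizer, the loss, and the foreground threshold are assumptions made for illustration.

```python
# Minimal sketch of the coupled auto-encoder idea (illustrative, not the paper's code).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """A small convolutional auto-encoder; layer sizes are illustrative."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

reconstruction_net = AutoEncoder()   # estimates approximate backgrounds from frames
background_net = AutoEncoder()       # learns the background model from those estimates

frames = torch.rand(8, 3, 64, 64)    # toy batch of video frames
optimizer = torch.optim.Adam(background_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Step 1: obtain approximate background images from the Reconstruction Network.
with torch.no_grad():
    approx_background = reconstruction_net(frames)

# Step 2: fit the Background Network to those approximate backgrounds.
prediction = background_net(approx_background)
loss = loss_fn(prediction, approx_background)
loss.backward()
optimizer.step()

# Motion mask: pixels where a frame deviates strongly from the modeled
# background (the 0.25 threshold is an illustrative assumption).
with torch.no_grad():
    mask = (frames - background_net(frames)).abs().mean(dim=1) > 0.25
```

In this sketch the online update would simply repeat step 2 on newly arriving frames; the paper's parameter-search acceleration for training the Reconstruction Network is not reproduced here.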