Abstract
Initial background estimation in video processing serves as a bootstrapping model for moving object detection based on background subtraction. In long-term videos, the initial background model may require continual updating as environmental changes occur, whether gradually or suddenly. In this paper we approach background initialization, together with its continual updating over time, by modeling the video background as the ever-changing states of weightless neural networks. The result is a background estimation method based on a weightless neural network, called BEWiS. The proposed approach is simple: background estimation at each pixel is carried out by a weightless neural network designed to learn pixel color frequencies as the video plays, and all networks share the same rule for memory retention during training. Because it operates in unsupervised mode, the approach has the advantage of providing a useful background model from the very beginning of the video. Moreover, depending on the video scene, the pixel-level learning rule can be tuned to address the scene's specific difficulties. The approach has been evaluated on the public Scene Background Initialization 2015 dataset and on the Scene Background Modeling Contest 2016 dataset, where it showed performance comparable or superior to state-of-the-art methods.
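To make the idea in the abstract concrete, the following is a minimal, simplified sketch of unsupervised per-pixel background estimation by color-frequency learning with a shared forgetting rule. It is not the BEWiS algorithm itself (which uses WiSARD-like weightless neural networks); all class and parameter names here are hypothetical, chosen only to illustrate the frame-by-frame learning described above.

```python
# Simplified sketch of per-pixel background estimation via weighted
# color-frequency learning with exponential forgetting. NOT the actual
# BEWiS method; an illustrative analogue of its pixel-level idea.

class PixelModel:
    """Tracks weighted frequencies of quantized colors at one pixel."""

    def __init__(self, decay=0.95, quant=16):
        self.decay = decay    # memory-retention factor, shared by all pixels
        self.quant = quant    # quantization step for 8-bit color channels
        self.weights = {}     # quantized color -> accumulated weight

    def observe(self, rgb):
        """Update the model with one observed RGB triple (0-255 per channel)."""
        key = tuple(c // self.quant for c in rgb)
        # Slightly forget all old evidence, then reinforce the observed color.
        for k in self.weights:
            self.weights[k] *= self.decay
        self.weights[key] = self.weights.get(key, 0.0) + 1.0

    def background(self):
        """Return the color currently judged to be background (bin center)."""
        key = max(self.weights, key=self.weights.get)
        return tuple(k * self.quant + self.quant // 2 for k in key)

# Usage: a mostly static gray pixel briefly occluded by a red moving object.
pm = PixelModel()
for _ in range(50):
    pm.observe((120, 120, 120))   # long-seen static background
for _ in range(5):
    pm.observe((250, 30, 30))     # short-lived foreground object
print(pm.background())            # the gray background still dominates
```

The shared `decay` factor plays the role of the common memory-retention rule mentioned in the abstract: frequently seen colors accumulate weight and persist, while transient foreground colors are gradually forgotten, so a usable background estimate is available from the first frames onward.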
Published in: Pattern Recognition Letters