Abstract

We present a novel method for online background modeling with static video cameras: Dynamic Spatial Predicted Background (DSPB). Our method uses a small subset of image pixels to predict the whole scene by exploiting correlations between pixels, both distant and close. DSPB is a hybrid model that combines successful elements of two major approaches: local-adaptive methods, which fit a distribution per pixel, and global-linear methods, which reconstruct the background by finding a low-rank version of the scene. To our knowledge, this is the first attempt to combine these approaches in a unified system. DSPB models the scene as a superposition of illumination effects, predicts each pixel's value with a linear estimator built from only 5 pixels of the scene, and can initialize the background as early as the 5th frame. This keeps the computational load low, allowing our method to be used in many real-time applications on simple hardware. The proposed prediction model of scene appearance is novel, and the scheme is both accurate and computationally efficient. We demonstrate the method's merits on video foreground-background (FG-BG) separation, showing how some of the main existing approaches can be challenged and how their drawbacks are less pronounced in our model. Experimental results validate our findings in terms of computation speed and mean F-measure on several public datasets. We also examine how results can improve by analyzing each video individually according to its content. DSPB can also be incorporated into other image-processing tasks such as change detection, video compression, and video inpainting.
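To make the core idea concrete, the following is a minimal, hypothetical sketch of a 5-pixel linear background estimator: coefficients for one target pixel are fit by least squares over an initial batch of frames, then used to predict that pixel's background value in a new frame, with a simple threshold giving an FG-BG decision. The least-squares fit, the synthetic data, the threshold value, and all variable names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_frames = 30
# Intensities of 5 reference pixels over n_frames frames (rows = frames).
refs = rng.uniform(0.0, 1.0, size=(n_frames, 5))
# Synthetic ground-truth mixing weights for the target pixel (assumption).
true_w = np.array([0.3, 0.1, 0.25, 0.2, 0.15])
# Target pixel follows a noisy linear combination of the 5 references.
target = refs @ true_w + rng.normal(0.0, 0.01, size=n_frames)

# Fit the 5-coefficient linear estimator from the observed frames.
w, *_ = np.linalg.lstsq(refs, target, rcond=None)

# Predict the background value of the target pixel in a new frame.
new_refs = rng.uniform(0.0, 1.0, size=5)
predicted = new_refs @ w

# FG-BG decision: flag the pixel as foreground when the observed value
# deviates strongly from its predicted background value.
observed = predicted + 0.3  # pretend a foreground object brightened the pixel
is_foreground = abs(observed - predicted) > 0.1
```

Since each pixel depends on only 5 references, the per-pixel prediction is a 5-element dot product, which is consistent with the low computational load the abstract claims.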
