Abstract

Background modelling plays an important role in detecting foreground objects for video analysis. Many background subtraction methods have been proposed in the past two decades, such as Gaussian Mixture Models (GMM) and running averages. Since these per-pixel approaches update the background at the pixel level, they are prone to false foreground and background classifications, which may result in foreground detection problems. For example, a slow-moving object, or one with intermittent motion, may be erroneously incorporated into the background model. These models also typically assume a clean background image at initialization, which is difficult to achieve in real-world scenarios, leading to the 'bootstrapping' challenge. These issues can be addressed by using high-level object-tracking information in an analysis stage whose results are fed back into the per-pixel model. This paper describes a method to model backgrounds using higher-level knowledge of object movements derived from a robust tracker. Experimental results reveal that our method works well and outperforms state-of-the-art background subtraction methods such as GMM and running averages in a scene exhibiting both bootstrapping and intermittent-object-motion challenges.
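To make the per-pixel baseline concrete, the following is a minimal sketch of a running-average background model of the kind the abstract compares against. All names, parameter values (`alpha`, `threshold`), and the synthetic scene are illustrative assumptions, not the paper's implementation; they merely show why a per-pixel update with no object-level feedback can absorb slow or intermittently moving objects.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    # Per-pixel running average: B_t = (1 - alpha) * B_{t-1} + alpha * I_t.
    # Every pixel is updated blindly, including pixels covered by objects.
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25.0):
    # Classify a pixel as foreground when it deviates from the model
    # by more than a fixed threshold (grey-level units are illustrative).
    return np.abs(frame.astype(np.float64) - background) > threshold

# Synthetic demo: a flat grey scene with sensor noise.
rng = np.random.default_rng(0)
scene = np.full((40, 40), 100.0)
background = scene.copy()

# Warm up the model on noisy background-only frames.
for _ in range(20):
    frame = scene + rng.normal(0.0, 2.0, scene.shape)
    background = update_background(background, frame)

# A new frame containing a bright foreground object (a square).
frame = scene + rng.normal(0.0, 2.0, scene.shape)
frame[10:20, 10:20] = 200.0
mask = foreground_mask(background, frame)
print(mask[10:20, 10:20].all())  # object pixels flagged as foreground
```

Note that if the bright square stayed put for many frames, the running average would slowly converge toward it and the mask would go quiet, which is exactly the intermittent-motion failure the abstract describes; tracker-level feedback could instead freeze the update for pixels known to belong to a tracked object.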
