Abstract

This paper describes a new background subtraction method for detecting moving foreground objects in video from a static camera. The background model assumes that, in each frame, every pixel is described as either an image patch or a texture patch. An image patch describes the local appearance of an image and is suitable when the background is structured and of fixed appearance. A texture patch is useful for describing a pixel that is part of a larger region containing a consistent patterning of pixels, but whose local appearance is either indistinct (e.g. uniform regions) or time-varying (e.g. the surface of the sea). Textures tend to correspond to regions rather than specific locations, so the texture descriptions are calculated globally (i.e. over the entire image). Over time, each pixel is represented as a weighted mixture of modes, as in the Stauffer and Grimson method, but instead of each mode being an image intensity with a variance, each mode now corresponds to either an image patch or a texture patch. The proposed background model is applied to the nine video sequences of the I2R data set and compared to the provided foreground ground truth. When the error is measured on a per-pixel basis, the proposed method performs better than the alternative methods tested.
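To make the "weighted mixture of modes" idea concrete, the sketch below implements the classic Stauffer and Grimson per-pixel mixture that the abstract builds on, where each mode is an intensity mean with a variance and a weight. The paper's contribution replaces these intensity modes with image or texture patches, which is not reproduced here; all parameter values (learning rate, match threshold, weight cutoff) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class PixelMixture:
    """Per-pixel mixture of modes in the Stauffer-Grimson style.

    Each mode is (mean, variance, weight).  On each new observation the
    best-matching mode is updated and its weight grows; unmatched modes
    decay.  An observation that matches no mode replaces the
    least-weighted mode and is flagged as foreground.
    Parameter values are illustrative, not from the paper.
    """

    def __init__(self, k=3, alpha=0.05, init_var=30.0, match_sigma=2.5):
        self.alpha = alpha                  # learning rate
        self.match_sigma = match_sigma      # match threshold in std devs
        self.means = np.zeros(k)
        self.vars_ = np.full(k, init_var)
        self.weights = np.full(k, 1.0 / k)

    def update(self, x):
        """Update the mixture with intensity x; return True if foreground."""
        dist = np.abs(x - self.means)
        matches = dist < self.match_sigma * np.sqrt(self.vars_)
        if matches.any():
            # pick the matching mode with the largest weight
            i = int(np.argmax(matches * self.weights))
            # matched mode gains weight, all others decay
            self.weights = (1.0 - self.alpha) * self.weights
            self.weights[i] += self.alpha
            # move the matched mode's mean/variance toward the observation
            rho = self.alpha
            self.means[i] += rho * (x - self.means[i])
            self.vars_[i] += rho * ((x - self.means[i]) ** 2 - self.vars_[i])
        else:
            # no mode explains x: replace the least-weighted mode
            i = int(np.argmin(self.weights))
            self.means[i] = x
            self.vars_[i] = 30.0 ** 2       # start with a broad variance
            self.weights[i] = 0.01
        self.weights /= self.weights.sum()
        # a pixel is foreground if its mode carries little evidence yet
        return bool(self.weights[i] < 0.2)
```

A mode seen repeatedly accumulates weight and becomes background; a sudden intensity change matches no mode and is reported as foreground. The proposed method keeps exactly this bookkeeping but swaps the (mean, variance) intensity model inside each mode for an image-patch or texture-patch description.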
