Abstract

Background information is crucial for many video surveillance applications such as object detection and scene understanding. In this paper, we present a novel pixel-to-model (P2M) paradigm for background modeling and restoration in surveillance scenes. In particular, the proposed approach models the background with a set of context features for each pixel, which are compressively sensed from local patches. We determine whether a pixel belongs to the background according to the minimum P2M distance, which measures the similarity between the pixel and its background model in the space of compressive local descriptors. The pixel feature descriptors of the background model are updated with respect to the minimum P2M distance. Meanwhile, the neighboring background model is renewed according to the maximum P2M distance to handle ghost holes. The P2M distance serves as a measure of background reliability in the 3-D spatial-temporal domain of surveillance videos, yielding a robust background model and recovered background videos. We apply the proposed P2M distance to foreground detection and background restoration on synthetic and real-world surveillance videos. Experimental results show that the proposed P2M approach outperforms state-of-the-art approaches in both indoor and outdoor surveillance scenes.
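The core idea can be illustrated with a minimal sketch. Here, a random projection stands in for the compressive sensing of local patches, a per-pixel list of descriptors stands in for the background model, and a pixel is classified by its minimum distance to that model. All parameters (patch size, descriptor dimension, sample count, threshold) are hypothetical placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not taken from the paper).
PATCH = 5          # side length of the local patch around each pixel
DIM = 8            # dimension of the compressive descriptor
THRESH = 4.0       # foreground threshold on the minimum P2M distance

# A fixed random projection acts as the compressive sensing matrix.
PROJ = rng.normal(size=(DIM, PATCH * PATCH))

def descriptor(patch):
    """Compressively sense a flattened local patch into a short descriptor."""
    return PROJ @ np.asarray(patch, dtype=float).ravel()

def min_p2m_distance(desc, model):
    """Minimum distance from a pixel's descriptor to its background model."""
    return min(np.linalg.norm(desc - m) for m in model)

def classify(patch, model):
    """Label a pixel background if its descriptor lies close to the model."""
    d = min_p2m_distance(descriptor(patch), model)
    return ("background" if d < THRESH else "foreground"), d

# Build a toy background model from 20 near-constant (dark) patches,
# then classify a matching patch and a very different (bright) one.
model = [descriptor(rng.normal(0.0, 0.01, (PATCH, PATCH))) for _ in range(20)]
bg_label, _ = classify(np.zeros((PATCH, PATCH)), model)
fg_label, _ = classify(np.full((PATCH, PATCH), 100.0), model)
print(bg_label, fg_label)
```

In the full method, the model descriptors would additionally be updated over time using the minimum P2M distance, and neighboring pixels' models renewed via the maximum P2M distance; this sketch only shows the classification step.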
