Abstract

Background modeling is an important step for many video surveillance applications such as object detection and scene understanding. In this paper, we present a novel Pixel-to-Model (P2M) paradigm for background modeling in crowded scenes. In particular, the proposed method models the background with a set of context features for each pixel, which are compressively sensed from local patches. We determine whether a pixel belongs to the background according to the minimum P2M distance, which measures the similarity between the pixel and its background model in the space of compressive local descriptors. Moreover, the background updating utilizes the minimum and maximum P2M distances to update the pixel feature descriptors in the local and neighboring background models, respectively. We evaluate the proposed approach on foreground detection tasks over real crowded surveillance videos. Experimental results show that the proposed P2M approach outperforms state-of-the-art methods in both indoor and outdoor crowded scenes.
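The classification rule described above can be illustrated with a minimal sketch. The paper does not specify its compressive sensing matrix, patch size, model size, or threshold, so all parameters and helper names below are hypothetical; a random projection stands in for the compressive measurement of local patches, and a pixel is labeled background when its minimum P2M distance to the stored descriptors falls under a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not taken from the paper).
PATCH = 5          # local patch is PATCH x PATCH pixels
M = 16             # number of compressive measurements per descriptor
MODEL_SIZE = 20    # descriptors kept in each pixel's background model
THRESHOLD = 1.0    # minimum-P2M-distance decision threshold (illustrative)

# A fixed random projection stands in for the compressive sensing matrix.
PROJ = rng.standard_normal((M, PATCH * PATCH)) / np.sqrt(M)

def compress(patch):
    """Compressively sense a local patch into an M-dim descriptor."""
    return PROJ @ patch.reshape(-1)

def min_p2m_distance(descriptor, model):
    """Minimum P2M distance: distance to the closest descriptor
    stored in the pixel's background model."""
    return min(np.linalg.norm(descriptor - d) for d in model)

def is_background(patch, model):
    """Classify a pixel as background when its minimum P2M distance
    falls below the threshold."""
    return min_p2m_distance(compress(patch), model) < THRESHOLD

# Toy usage: the model holds descriptors of noisy copies of one
# background patch; a strongly differing patch should be foreground.
bg_patch = rng.random((PATCH, PATCH))
model = [compress(bg_patch + 0.01 * rng.standard_normal((PATCH, PATCH)))
         for _ in range(MODEL_SIZE)]

bg_result = is_background(bg_patch, model)        # near-identical patch
fg_result = is_background(bg_patch + 10.0, model) # strongly differing patch
print(bg_result, fg_result)
```

In the full method the background updating step would additionally replace descriptors in the local and neighboring models guided by the minimum and maximum P2M distances; that bookkeeping is omitted here for brevity.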

