Abstract

Traditional background subtraction algorithms assume the camera is static and are based on simple per-pixel models of scene appearance. This leads to false detections when the camera moves. While camera motion can sometimes be addressed by online image registration, that approach is prone to dramatic failures and long-term drift. We present a novel background subtraction algorithm designed for pan-tilt-zoom cameras that overcomes this challenge without the need for explicit image registration. The proposed algorithm automatically trains a discriminative background model that is global in the sense that it is the same regardless of image location. Our approach first extracts multiple features from across the image and applies principal component analysis for dimensionality reduction. The extracted features are then grouped to form a Bag of Features, and a global background model is learned from the bagged features using a Support Vector Machine. The proposed approach is fast and accurate: having a single global model makes it computationally inexpensive compared to traditional pixel-wise models. It outperforms several state-of-the-art algorithms on the pan-tilt-zoom and baseline categories of CDnet 2014 and on the Hopkins155 dataset. In particular, it achieves an F-Measure of 75.41% on the CDnet PTZ category, significantly better than the previously reported best score of 62.07%. These results show that by removing the coupling between the detection model and spatial location, we significantly increase robustness to camera motion.
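The core idea of the abstract, a single location-independent classifier over PCA-reduced features, can be illustrated with a minimal sketch. This is not the authors' implementation: the feature extraction and Bag-of-Features grouping are stubbed out with synthetic per-patch feature vectors, and scikit-learn stands in for whatever PCA/SVM machinery the paper actually uses.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for per-patch feature vectors (e.g. color/texture
# descriptors) labelled background (0) or foreground (1). In the paper these
# would be extracted from across the image; here they are random placeholders.
n, d = 400, 24
bg = rng.normal(0.0, 1.0, size=(n, d))
fg = rng.normal(2.5, 1.0, size=(n, d))
X = np.vstack([bg, fg])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Dimensionality reduction with PCA, as described in the abstract.
pca = PCA(n_components=8).fit(X)
Xp = pca.transform(X)

# One global discriminative model: a single SVM shared by all image
# locations, instead of a separate background model per pixel.
clf = SVC(kernel="rbf").fit(Xp, y)

# New feature vectors are classified the same way regardless of where in
# the frame they came from, so camera motion does not invalidate the model.
probe_bg = pca.transform(rng.normal(0.0, 1.0, size=(5, d)))
probe_fg = pca.transform(rng.normal(2.5, 1.0, size=(5, d)))
print(clf.predict(probe_bg))
print(clf.predict(probe_fg))
```

Because the model carries no per-pixel state, a pan, tilt, or zoom only changes which scene content each pixel sees, not the validity of the learned decision boundary, which is the decoupling the abstract credits for the robustness gain.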

