Abstract

This paper presents an integrated background subtraction and shadow detection algorithm that identifies background, shadow, and foreground regions in a video sequence, a fundamental task in video analytics. The background is modeled at pixel level with a collection of previously observed background pixel values. An incoming pixel is classified as background if it matches the required number of samples in the model. This required number of matches is continuously adapted at pixel level according to the stability of pixel observations over time, thereby making better use of the samples in dynamic as well as stable regions of the background. Pixels not classified as background in the background subtraction step are compared with a pixel-level shadow model. The shadow model is similar to the background model in that it consists of actually observed shadowed pixel values. This sample-based shadow modeling is a novel approach to the difficult problem of accurately modeling all types of shadows, and shadow detection by matching against the samples in the model exploits the recurrence of similar shadow values at pixel level. Evaluation on various public datasets demonstrates near state-of-the-art background subtraction and state-of-the-art shadow detection performance. Although the proposed method includes shadow detection processing, its implementation cost is small compared with existing methods.

Highlights

  • The use of change detection algorithms to automatically segment a video sequence from a stationary camera into background and foreground regions is a crucial first step in several computer vision applications

  • The decision process for classifying an incoming pixel as background or foreground is a consensus: a pixel at location x in the input frame is classified as background if it matches at least #min(x) of the N samples in its background model (see the first sketch after this list)

  • This shadow-induced variation in pixel value is otherwise highly difficult to characterize because it depends on a large number of factors, such as the nature and number of other light sources, the reflective properties of the surface and of other scene objects, and the texture of the surface. Since these factors that determine the shadow pixel values are more or less stable, the values that a given pixel takes when shadowed by different foreground objects show a certain agreement even in complex illumination conditions [32]. Based on this rationale that shadow pixel values recur at pixel level, each pixel in the initial shadow mask M1 is compared with the pixel-level shadow model, and those pixels which find at least #min matches with the samples in the shadow model are labeled as shadow in the final shadow mask M2 (see the second sketch after this list)
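
As a concrete illustration of the consensus test described in the second highlight, the sketch below classifies a single pixel against its per-pixel sample set. It is a minimal sketch only: the sample count N, the matching tolerance MATCH_RADIUS, and the use of Euclidean colour distance are illustrative assumptions, and the paper's rule for adapting #min(x) over time is not reproduced here.

import numpy as np

# Minimal sketch of the consensus background test, assuming Euclidean colour
# distance and fixed illustrative parameters; the adaptive rule for #min(x)
# described in the paper is not reproduced here.
N = 20               # number of background samples stored per pixel (assumed)
MATCH_RADIUS = 20.0  # colour distance below which a sample counts as a match (assumed)

def is_background(pixel, samples, min_matches):
    """Classify a pixel as background if it matches at least
    min_matches of the N samples in its background model."""
    dists = np.linalg.norm(samples - pixel, axis=1)  # distance to each stored sample
    return np.count_nonzero(dists < MATCH_RADIUS) >= min_matches

# Example: an RGB pixel tested against its own per-pixel sample set
samples = np.random.randint(0, 256, size=(N, 3)).astype(float)
pixel = samples[0] + np.random.normal(0.0, 3.0, size=3)  # close to one stored sample
print(is_background(pixel, samples, min_matches=1))      # prints True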
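
A companion sketch for the third highlight: pixels that fail the background test are compared against a per-pixel shadow model holding previously observed shadowed values, and labeled shadow on finding enough matches. Again, SHADOW_RADIUS and MIN_SHADOW_MATCHES are illustrative placeholders rather than the paper's actual parameters.

import numpy as np

# Minimal sketch of the shadow stage, under the same illustrative assumptions
# as above: a pixel that failed the background test is matched against the
# previously observed shadowed values stored for that pixel location.
SHADOW_RADIUS = 15.0     # assumed matching tolerance for shadow samples
MIN_SHADOW_MATCHES = 2   # assumed required number of matches (#min)

def is_shadow(pixel, shadow_samples):
    """Label a non-background pixel as shadow if it matches at least
    MIN_SHADOW_MATCHES of the stored shadowed pixel values."""
    dists = np.linalg.norm(shadow_samples - pixel, axis=1)
    return np.count_nonzero(dists < SHADOW_RADIUS) >= MIN_SHADOW_MATCHES

# Pixels that are neither background nor shadow remain foreground, so the
# final per-pixel label follows from applying the two tests in sequence.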

Summary

Introduction

The use of change detection algorithms to automatically segment a video sequence from a stationary camera into background and foreground regions is a crucial first step in several computer vision applications. Results from this low-level task are often used for higher-level tasks such as tracking, counting, recognition, and classification. Real-world backgrounds are rarely static: elements such as swaying vegetation, rippling water, or flickering illumination produce motion that is irrelevant to the analysis. A good change detection algorithm should classify such regions of irrelevant motion as background to exclude them from further analysis. A further major challenge for change detection algorithms is presented by the cast shadows which accompany foreground objects. Unless explicit handling is done, background subtraction algorithms tend to classify cast shadows as part of the foreground, which is detrimental to the subsequent stages of analysis. Post-processing operations such as morphological or median filtering are often used to ensure some spatial consistency in the segmentation results.
