Abstract

This paper offers a focused perspective on background subtraction for moving object detection, a building block of many computer vision applications and the first relevant step for subsequent recognition, classification, and activity analysis tasks. Since color information alone cannot cope with problems such as light switches or gradual local changes of illumination, shadows cast by foreground objects, and color camouflage, additional information must be exploited to address these issues. Synchronized depth information acquired by low-cost RGBD sensors is considered in this paper to establish which of these issues can be solved, but also to highlight new challenges and design opportunities in several applications and research areas.

Highlights

  • Background modeling is a critical component of motion detection tasks, and it is essential for most modern video surveillance applications

  • Using depth data alone poses several problems and does not guarantee the required efficiency: (a) depth-based segmentation fails in the case of depth camouflage, which occurs when foreground objects move close to the modeled background; (b) object silhouettes are strongly affected by the high level of depth noise at object boundaries; (c) depth measurements are not always available for all image pixels, due to multiple reflections, scattering on certain surfaces, or occlusions

  • Color cameras are based on sensors such as CCD or CMOS, which provide a reliable representation of the scene through high-resolution images. Background subtraction using this kind of sensor often yields a precise separation between foreground and background, even though well-known scene background modeling challenges for moving object detection must be taken into account [25,26]:


Summary

Introduction

Background modeling is a critical component of motion detection tasks, and it is essential for most modern video surveillance applications. Using depth data alone poses several problems and does not guarantee the required efficiency: (a) depth-based segmentation fails in the case of depth camouflage, which occurs when foreground objects move close to the modeled background; (b) object silhouettes are strongly affected by the high level of depth noise at object boundaries; (c) depth measurements are not always available for all image pixels, due to multiple reflections, scattering on certain surfaces, or occlusions. All these issues arose with several background modeling approaches based solely on depth, as proposed in [6,7,8,9,10], mainly as building blocks for people detection and tracking systems [11,12,13,14]. We provide the most extensive comparison of the existing methods across several benchmark datasets.
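The complementarity between color and depth described above can be illustrated with a minimal per-pixel background subtraction sketch. The function below is not the paper's method; it is a hypothetical frame-differencing baseline (all names and thresholds are illustrative) that flags a pixel as foreground when it deviates from the background model in color or in depth, while ignoring invalid depth readings (encoded as 0, as in Kinect-style sensors) so that such pixels fall back to the color test alone, mitigating issue (c).

```python
import numpy as np

def subtract_background(frame_rgb, frame_depth, bg_rgb, bg_depth,
                        tau_rgb=30.0, tau_depth=40.0):
    """Classify each pixel as foreground if it differs from the
    background model in color OR in depth (illustrative thresholds)."""
    # Per-pixel color distance: sum of absolute channel differences
    diff_rgb = np.abs(frame_rgb.astype(np.float32)
                      - bg_rgb.astype(np.float32)).sum(axis=2)
    fg_rgb = diff_rgb > tau_rgb

    # Depth test only where both current and background depth are valid;
    # 0 marks a missing measurement (reflections, scattering, occlusion)
    valid = (frame_depth > 0) & (bg_depth > 0)
    diff_depth = np.abs(frame_depth.astype(np.float32)
                        - bg_depth.astype(np.float32))
    fg_depth = valid & (diff_depth > tau_depth)

    # Foreground if either modality detects a change
    return fg_rgb | fg_depth
```

Note how the depth term also exposes issue (a): when an object's depth approaches the background's (depth camouflage), `diff_depth` falls below the threshold and only the color cue remains.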

RGBD Data and Related Issues for Background Subtraction
Methods
Metrics
Datasets
Background Subtraction
Comparisons
Comparisons on the MULTIVISION Kinect Dataset
Method
Comparisons on the MULTIVISION Stereo Dataset
Comparisons on the RGB-D Object Detection Dataset
Comparisons on the GSM Dataset
Conclusions and Future Research
