Abstract

Depth-sensing technology has led to broad applications of inexpensive depth cameras that can capture human motion and scenes in three-dimensional space. Background subtraction algorithms can be improved by fusing color and depth cues, thereby resolving many of the issues encountered in classical color segmentation. In this paper, we propose a new fusion method that combines depth and color information for foreground segmentation, built on an advanced color-based algorithm. First, a background model and a depth model are developed. Then, based on these models, we propose a new updating strategy that eliminates ghosting and black shadows almost completely. Extensive experiments have been performed to compare the proposed algorithm with other conventional RGB-D (Red-Green-Blue and Depth) algorithms. The experimental results suggest that our method extracts foregrounds with higher effectiveness and efficiency.

Highlights

  • In recent years, enormous amounts of data on human or animal behavior have been collected using 2D and 3D cameras, and automated methods for detecting and tracking individuals or animals have begun to play an important role in studies of experimental biology, behavioral science, and related disciplines

  • We propose an adaptive Visual Background Extractor (ViBe) background subtraction algorithm that fuses the depth and color information obtained by the Kinect sensor to segment foreground regions

  • We compared the results of our method with those obtained using two state-of-the-art RGB-D fusion-based background subtraction algorithms, namely, MOG4D [18] and DECB [19], as well as the ViBe algorithm on the color images [8] (ViBe) and the ViBe based only on depth (ViBe1D)

Summary

Introduction

Enormous amounts of data on human or animal behavior have been collected using 2D and 3D cameras, and automated methods for detecting and tracking individuals or animals have begun to play an important role in studies of experimental biology, behavioral science, and related disciplines. Most of the conventional methods mentioned above were designed for application to color images. Depth is another interesting cue for segmentation that is less strongly affected by the adverse effects encountered in classical color segmentation, such as shadow and highlight regions. The method presented in [20] combines a mixture-of-Gaussians background subtraction algorithm with a new Bayesian network; the Gaussian mixture model is observed for each pixel over a sequence of frames. Existing RGB-D (RGB and Depth) segmentation algorithms either suffer from ghosts, such as the Depth-Extended Codebook (DECB) [19] and the 4D version of Mixture of Gaussians (MOG4D) [18], or fail to achieve real-time performance [21]. We propose an adaptive ViBe background subtraction algorithm that fuses the depth and color information obtained by the Kinect sensor to segment foreground regions.
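To make the ViBe idea concrete, the following is a minimal single-pixel sketch of sample-based background subtraction, not the paper's fused RGB-D method. All names and parameter values (N, R, HASH_MIN, PHI, `classify`, `update`) are illustrative assumptions: each pixel keeps N past samples, is classified as background if at least HASH_MIN samples lie within a distance R of the current value, and the model is updated conservatively by random replacement.

```python
import numpy as np

# Hypothetical single-channel, single-pixel ViBe-style sketch.
N = 20        # samples kept per pixel model
R = 20        # distance radius for a sample to count as "close"
HASH_MIN = 2  # minimum close samples to classify as background
PHI = 16      # subsampling factor: update with probability 1/PHI

rng = np.random.default_rng(0)

def classify(pixel, samples):
    """Return True if `pixel` matches the background model `samples`."""
    close = np.sum(np.abs(samples.astype(int) - pixel) < R)
    return bool(close >= HASH_MIN)

def update(pixel, samples):
    """Conservative update: randomly replace one stored sample."""
    if rng.integers(PHI) == 0:
        samples[rng.integers(N)] = pixel

# Toy usage: a model built from a stable background value near 100.
samples = rng.integers(95, 106, size=N)
print(classify(100, samples))  # → True  (background)
print(classify(200, samples))  # → False (foreground)
```

The full algorithm applies this per pixel on color images (and, in the proposed method, fuses an analogous depth model); the random, conservative update is what lets ViBe adapt over time without absorbing foreground objects into the background.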

ViBe Background Subtraction Algorithm
Pixel Model
Classification Process
Updating the Background Model over Time
Depth Model
Fusion Algorithm for Color and Depth Images
Experiments and Results
Conclusions
