Abstract

This paper addresses the problem of foreground/background segmentation. Multi-modal data, specifically RGBD data, has recently benefited many computer vision tasks. However, background subtraction techniques based on a single modality still report state-of-the-art results on many benchmarks. Successfully fusing depth and color data for this task requires a robust formulation that allows both higher precision and faster processing. To this end, we propose to use kernel density estimation to model multi-modal data. To speed up kernel density estimation, we exploit the fast Gauss transform, which allows the summation of a mixture of M kernels at N evaluation points in O(M+N) time, as opposed to O(MN) time for a direct evaluation. Extensive experiments have been carried out on four publicly available RGBD foreground/background datasets. The results demonstrate that our proposal outperforms state-of-the-art methods on almost all of the sequences, acquired in challenging indoor and outdoor contexts, with a fast and non-parametric operation.
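
As a rough illustration of the complexity claim above, the following sketch (not the paper's implementation; the function name, bandwidth, and threshold are assumptions for illustration) evaluates a direct Gaussian KDE background model on RGBD feature vectors. This is the O(MN) baseline that fast Gauss transform methods approximate in O(M+N) time.

```python
# Minimal sketch of direct kernel density estimation over RGBD features.
# Each background sample and each pixel observation is a 4-D vector (R, G, B, D).
# Direct evaluation of M kernels at N points costs O(MN); fast Gauss transform
# implementations approximate the same sums in O(M + N).
import numpy as np

def kde_background_probability(samples, observations, bandwidth=0.1):
    """Direct Gaussian KDE: probability of each observation under the
    background model built from `samples`.

    samples:      (M, 4) array of past RGBD background samples.
    observations: (N, 4) array of current RGBD pixel values.
    """
    M = samples.shape[0]
    # Pairwise squared distances between observations and samples: shape (N, M).
    diff = observations[:, None, :] - samples[None, :, :]
    sq_dist = np.sum(diff ** 2, axis=-1)
    # Sum the M Gaussian kernels at each of the N evaluation points.
    kernels = np.exp(-sq_dist / (2.0 * bandwidth ** 2))
    return kernels.sum(axis=1) / M

# Usage: pixels whose background probability falls below a threshold
# are labeled foreground (values here are hypothetical).
samples = np.random.rand(500, 4)        # background history
observations = np.random.rand(1000, 4)  # current frame pixels
prob = kde_background_probability(samples, observations)
foreground_mask = prob < 0.05           # threshold chosen for illustration
```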
