Abstract

We present a refinement framework for background subtraction that combines color and depth data. Foreground objects are first segmented from the color and depth data independently, so any existing background subtraction (BGS) method can be applied. These two initial foreground detections can be highly inaccurate in certain situations, such as shadowing and color camouflage. We therefore focus on refining the inaccurate results with a supervised learning approach: features are re-extracted from the source color and depth data and, together with the initial detection results, fed to classifiers to obtain a better foreground detection. Experiments show that our method takes full advantage of both modalities to detect foreground under color camouflage and shadowing, yielding promising results that are robust to inaccurate initial detections and outperform state-of-the-art algorithms based on color and depth data.
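The refinement step described above can be sketched as a per-pixel classification problem: the two initial (noisy) foreground masks and features re-extracted from the color and depth data form a feature vector, which a supervised classifier maps to a refined foreground decision. The sketch below uses synthetic data and a hand-rolled logistic regression as the classifier; the feature names (`color_fg`, `depth_fg`, `color_dist`, `depth_diff`) and all numeric settings are illustrative assumptions, not the paper's actual features or classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-pixel training data (illustrative only).
# labels: ground-truth foreground (1) / background (0) per pixel.
n = 2000
labels = rng.integers(0, 2, n)

# Noisy initial detections from independent color- and depth-based BGS,
# plus two re-extracted features (distances to the background model).
color_fg = np.clip(labels + rng.normal(0.0, 0.6, n), 0.0, 1.0)
depth_fg = np.clip(labels + rng.normal(0.0, 0.4, n), 0.0, 1.0)
color_dist = labels * rng.uniform(0.2, 1.0, n) + rng.normal(0.0, 0.1, n)
depth_diff = labels * rng.uniform(0.3, 1.0, n) + rng.normal(0.0, 0.1, n)

# Feature vector = initial detections + re-extracted features.
X = np.column_stack([color_fg, depth_fg, color_dist, depth_diff])
y = labels.astype(float)

# Train a logistic-regression refinement classifier by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted foreground probability
    w -= 1.0 * (X.T @ (p - y)) / n           # gradient step on weights
    b -= 1.0 * np.mean(p - y)                # gradient step on bias

# Refined per-pixel foreground mask and its accuracy on the training data.
refined = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float(np.mean(refined == y))
```

In practice the classifier would be trained on pixels with ground-truth annotations and then applied frame by frame; the key design point from the abstract is that the classifier sees both the initial detections and fresh features from the raw data, so it can overrule a detector that fails under shadowing or camouflage.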
