Abstract

Background modeling and subtraction is a classical topic in computer vision. Gaussian mixture modeling (GMM) is a popular choice for its ability to adapt to background variations. Many improvements have been made to enhance its robustness by considering spatial consistency and temporal correlation. In this paper, we propose a sharable GMM based background subtraction approach. Firstly, a sharable mechanism is presented to model the many-to-one relationship between pixels and models. Each pixel dynamically searches for the best matched model in its neighborhood. This space-sharing scheme is robust to camera jitter, dynamic background, etc. Secondly, sharable models are built for both background and foreground. The noise caused by small local movements can be effectively eliminated through the background sharable models, while the integrity of moving objects is enhanced by the foreground sharable models, especially for small objects. Finally, each sharable model is updated by randomly selecting a pixel that matches it, and a flexible mechanism is added for switching between background and foreground models. Experiments on the ChangeDetection benchmark dataset demonstrate the effectiveness of our approach.
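To make the sharable mechanism concrete, the following is a minimal sketch of the neighborhood-search matching step and an online GMM update. It is not the authors' implementation; the class `Mode`, the function names, and parameters such as `radius`, `thresh`, and `lr` are illustrative assumptions chosen to show how a pixel could share the best matched Gaussian mode from its spatial neighborhood.

```python
import numpy as np

# Hypothetical single Gaussian mode with weight, mean, and variance
# (typical components of one GMM mode per pixel).
class Mode:
    def __init__(self, mean, var=225.0, weight=0.05):
        self.mean = float(mean)
        self.var = float(var)
        self.weight = float(weight)

def best_shared_match(models, x, y, value, radius=1, thresh=2.5):
    """Search the (2*radius+1)^2 neighborhood of pixel (x, y) for the
    mode whose mean is closest to `value` within `thresh` standard
    deviations. `models` is an H x W grid of per-pixel mode lists; the
    neighborhood search is what makes the pixel-to-model relationship
    many-to-one (space sharing)."""
    h, w = len(models), len(models[0])
    best, best_dist = None, np.inf
    for ny in range(max(0, y - radius), min(h, y + radius + 1)):
        for nx in range(max(0, x - radius), min(w, x + radius + 1)):
            for mode in models[ny][nx]:
                d = abs(value - mode.mean) / np.sqrt(mode.var)
                if d < thresh and d < best_dist:
                    best, best_dist = mode, d
    return best  # None means no shared model explains the pixel

def update_mode(mode, value, lr=0.05):
    """Standard online GMM update applied to the matched shared mode;
    in the paper's scheme the updating pixel would be drawn at random
    from the pixels matching this mode."""
    diff = value - mode.mean
    mode.mean += lr * diff
    mode.var += lr * (diff * diff - mode.var)
    mode.weight = (1 - lr) * mode.weight + lr
```

A pixel with no match in either the background or foreground shared models would be treated as newly observed, and a switching rule (not shown here) would move modes between the background and foreground sets as their evidence accumulates.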
