Abstract

Background modeling and subtraction is a classical topic in computer vision. Gaussian mixture modeling (GMM) is a popular choice because it can adapt to background variations. Many improvements have been made to enhance robustness by considering spatial consistency and temporal correlation. In this paper, we propose a sharable GMM-based background subtraction approach. Firstly, a sharable mechanism is presented to model the many-to-one relationship between pixels and models: each pixel dynamically searches for the best-matched model in its neighborhood. This space-sharing scheme is robust to camera jitter, dynamic backgrounds, and similar disturbances. Secondly, sharable models are built for both background and foreground. Noise caused by small local movements is effectively eliminated by the background sharable models, while the integrity of moving objects, especially small objects, is enhanced by the foreground sharable models. Finally, each sharable model is updated by randomly selecting one pixel that matches it, and a flexible mechanism is added for switching between background and foreground models. Experiments on the ChangeDetection benchmark dataset demonstrate the effectiveness of our approach.
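To make the space-sharing idea concrete, the following is a minimal sketch, not the authors' implementation. It simplifies the per-pixel mixture to a single Gaussian per location and shows only the neighborhood-matching step: each pixel compares itself against the models within a small radius and takes the best match, so a model can serve many pixels. The function name, parameters, and the per-pixel (rather than randomly selected) update are all illustrative assumptions.

```python
import numpy as np

def sharable_background_subtract(frame, means, variances,
                                 thresh=2.5, lr=0.05, radius=1):
    """Label each pixel background (0) or foreground (1).

    Hypothetical simplification of the sharable-GMM idea: one
    Gaussian per location, but a pixel may match any model within
    `radius` of its own position (the space-sharing step).
    """
    h, w = frame.shape
    mask = np.ones((h, w), dtype=np.uint8)  # assume foreground by default
    for y in range(h):
        for x in range(w):
            # Search the neighborhood for the best-matched model.
            best = None  # (normalized distance, ny, nx)
            for ny in range(max(0, y - radius), min(h, y + radius + 1)):
                for nx in range(max(0, x - radius), min(w, x + radius + 1)):
                    d = abs(frame[y, x] - means[ny, nx]) / np.sqrt(variances[ny, nx])
                    if best is None or d < best[0]:
                        best = (d, ny, nx)
            d, ny, nx = best
            if d < thresh:  # matched a nearby background model
                mask[y, x] = 0
                # Update the shared model with a small learning rate.
                # (The paper instead updates each model from one randomly
                # chosen matching pixel; this per-pixel update is a stand-in.)
                diff = frame[y, x] - means[ny, nx]
                means[ny, nx] += lr * diff
                variances[ny, nx] += lr * (diff * diff - variances[ny, nx])
    return mask
```

Because a pixel may match a neighbor's model, a one-pixel camera jitter that shifts the scene still finds a matching background model nearby, which is the robustness property the abstract describes.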
