Abstract

Background subtraction and foreground extraction are typical methods for detecting moving objects in video sequences. To flexibly represent both the long-term state of a scene and its short-term changes, a new weighted Kernel Density Estimation (KDE) is proposed to build long-term background (LTB) and short-term foreground (STF) models, respectively. A novel mechanism is proposed to support interaction between the LTB and STF models, comprising weight transmission and fusion. In the weight-transmission process, the sample weights of one model (either the background or the foreground model) at the current time step are updated under the guidance of the other model's decision at the previous time step. In the background-foreground fusion stage, a unified Bayesian framework is proposed to detect objects, where the detection result at each time step is given by the logarithm of the posterior ratio between the background and foreground models. This interactive approach improves the robustness of moving object detection, preventing deadlock and degeneration in the models. Experimental results demonstrate that the proposed approach outperforms previous ones.
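The per-pixel decision described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes Gaussian kernels, equal priors, and a zero decision threshold, and the sample sets, weights, and bandwidth are hypothetical placeholders standing in for the LTB/STF model state.

```python
import numpy as np

def weighted_kde(x, samples, weights, bandwidth=0.1):
    """Weighted Gaussian kernel density estimate at intensity x.

    Each stored sample contributes a Gaussian kernel scaled by its weight;
    the weights here play the role of the transmitted sample weights."""
    k = np.exp(-0.5 * ((x - samples) / bandwidth) ** 2) \
        / (bandwidth * np.sqrt(2.0 * np.pi))
    return float(np.sum(weights * k) / np.sum(weights))

def classify_pixel(x, bg_samples, bg_weights, fg_samples, fg_weights,
                   bandwidth=0.1, eps=1e-12, threshold=0.0):
    """Label a pixel via the log posterior ratio log(p_bg / p_fg).

    With equal priors the posterior ratio reduces to the likelihood
    ratio; a positive log ratio favors background, negative favors
    foreground (hypothetical decision rule for illustration)."""
    p_bg = weighted_kde(x, bg_samples, bg_weights, bandwidth)
    p_fg = weighted_kde(x, fg_samples, fg_weights, bandwidth)
    log_ratio = np.log(p_bg + eps) - np.log(p_fg + eps)
    label = "background" if log_ratio > threshold else "foreground"
    return label, log_ratio

# Toy example: background intensities cluster near 0.2, foreground near 0.8
bg_s = np.array([0.18, 0.20, 0.22, 0.19])
bg_w = np.ones(4)
fg_s = np.array([0.78, 0.80, 0.82])
fg_w = np.ones(3)

label, ratio = classify_pixel(0.21, bg_s, bg_w, fg_s, fg_w)
```

A pixel near the background cluster (e.g. 0.21) yields a large positive log ratio and is labeled background, while one near 0.8 is labeled foreground; in the full method the weights would additionally be updated each frame under the other model's previous decision.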
