Abstract

Moving object detection is a fundamental task in many video processing applications, such as video surveillance. The robustness and efficiency of background subtraction make it one of the most common methods for detecting moving objects in a video stream. However, adapting background models to moving cameras remains challenging. Major issues include maintaining the model under viewpoint changes and compensating for camera motion in the presence of depth variations. Moreover, gradual illumination changes, dynamic backgrounds, and complex motions accumulate over time in moving-camera scenarios, further complicating background model maintenance. In this context, this paper proposes a novel Robust and Online Tensor-based model named ROTAB that models the relationship between sequential frames more implicitly than previous methods, allowing better adaptation to background changes. Moreover, we propose an improved version of FISTA named IFISTA that employs two strategies to reduce oscillatory behavior and the number of iterations, improving stability and efficiency. In practice, the combination of IFISTA and ROTAB (IFISTA-ROTAB) demonstrates suitable performance for real-time applications. Quantitative and qualitative experiments conducted on three large-scale datasets, namely CDnet 2014, BMC 2012, and LASIESTA, show the superiority of IFISTA-ROTAB, with an average gain of two to seven percent.
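For context on the optimization machinery the abstract refers to, the sketch below shows the classical FISTA iteration (Beck and Teboulle) applied to an L1-regularized least-squares problem. It is not the authors' IFISTA or the ROTAB tensor model; the problem instance, function names, and step-size choice are illustrative assumptions, shown only to make the baseline scheme that IFISTA modifies concrete.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the L1 norm (elementwise shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista_lasso(A, b, lam, n_iter=200):
    """Standard FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Illustrative baseline only; the paper's IFISTA adds strategies
    (not reproduced here) to damp oscillations and reduce iterations.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()                           # extrapolated (momentum) point
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov-style momentum
        x, t = x_new, t_new
    return x

# Minimal usage example on synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    b = A @ x_true
    x_hat = fista_lasso(A, b, lam=0.1)
    print("nonzeros recovered:", np.sum(np.abs(x_hat) > 1e-3))
```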
