Abstract
Background subtraction is a fundamental step in moving object detection. Because current methods rely on specific data models or learning-based representations of the background, they suffer from model distortion when the scene does not satisfy their assumptions or model conditions. A novel data-driven framework is proposed that represents the background through its intrinsic characteristics. Within the framework, the model-free adaptive control method is used as an instance to analyze the background from its states at the nearest time instants and to linearize them dynamically from a purely data-driven perspective. To cope with occlusion by foreground objects, a selective update scheme is employed for background maintenance. Experiments under different video conditions compare the proposed algorithm with state-of-the-art background models. The results show that the proposed method exceeds 95% in F-measure and percentage of correct classification in most cases, outperforming the other state-of-the-art methods. Furthermore, it is more robust in severe video conditions, including bad weather and night scenes, and its simplified data-driven control laws make it suitable for outdoor video surveillance.
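The abstract does not give the full MFAC-based formulation, so the following is only a minimal Python sketch of the selective-update idea it mentions: pixels classified as foreground are excluded from the background update so that occluding objects do not corrupt the background estimate. The class name, update gain `alpha`, and decision `threshold` are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

class SelectiveUpdateBackground:
    """Hypothetical per-pixel background model with selective update."""

    def __init__(self, first_frame, alpha=0.05, threshold=25):
        # Background initialised from the first grayscale frame (assumption).
        self.background = first_frame.astype(np.float64)
        self.alpha = alpha          # update gain (assumed value)
        self.threshold = threshold  # foreground decision threshold (assumed value)

    def apply(self, frame):
        frame = frame.astype(np.float64)
        diff = np.abs(frame - self.background)
        foreground = diff > self.threshold  # pixels flagged as moving objects
        stationary = ~foreground
        # Selective update: only pixels judged to be background are blended in,
        # so foreground occlusion does not distort the background estimate.
        self.background[stationary] += self.alpha * (
            frame[stationary] - self.background[stationary]
        )
        return (foreground.astype(np.uint8)) * 255


if __name__ == "__main__":
    # Usage sketch with synthetic frames standing in for a video stream.
    rng = np.random.default_rng(0)
    first = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
    model = SelectiveUpdateBackground(first)

    next_frame = first.copy()
    next_frame[100:140, 150:200] = 255  # simulate a bright moving object
    mask = model.apply(next_frame)
    print("foreground pixels:", int(mask.sum() // 255))
```

In the paper's framework the blending step above would be replaced by the model-free adaptive control law obtained from dynamic linearization of recent background states; the simple exponential update here is only a stand-in for that data-driven update.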