Abstract

Background modeling and subtraction, the task of detecting moving objects in a scene, is a fundamental and critical step for many high-level computer vision tasks. However, background subtraction remains an open and challenging problem, particularly in practical scenarios with drastic illumination changes and dynamic backgrounds. In this paper, we propose a novel foreground detection method based on convolutional neural networks (CNNs) to address these challenges. First, given a clean background image without moving objects, an adjustable neighborhood around each pixel of the background image is constructed to form windows, and CNN features are extracted from each window with a pre-trained CNN model to build a feature-based background model. Second, features are extracted from the current frame of the video with the same operation, and the Euclidean distance between the CNN features of the current frame and those of the background image is used to build a distance map (distmap). Third, the distmap is fed into a graph cut algorithm to obtain the foreground mask. To cope with background changes, the background model is updated at a certain rate. Experimental results verify that the proposed approach effectively detects foreground objects in complex background environments and outperforms several state-of-the-art methods.
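The abstract outlines a per-pixel pipeline: window-based CNN features for the background model, a Euclidean distance map against the current frame, graph-cut segmentation, and a rate-based background update. The following minimal sketch illustrates that flow under stated assumptions: `cnn_extract` is a hypothetical callable wrapping a pre-trained CNN, the threshold-based mask only approximates the paper's graph-cut step, and the exponential update rule and rate value are assumed rather than taken from the paper.

```python
import numpy as np

def extract_window_features(image, window_size, cnn_extract):
    """Per-pixel feature map: a (window_size x window_size) neighborhood
    around each pixel is passed to `cnn_extract` (hypothetical wrapper
    around a pre-trained CNN) and returns a 1-D feature vector."""
    h, w = image.shape[:2]
    pad = window_size // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    feats = [[cnn_extract(padded[y:y + window_size, x:x + window_size])
              for x in range(w)] for y in range(h)]
    return np.asarray(feats)                      # shape: (h, w, d)

def distance_map(frame_feats, bg_feats):
    """Euclidean distance between frame and background features per pixel."""
    return np.linalg.norm(frame_feats - bg_feats, axis=-1)

def foreground_mask(distmap, threshold):
    """Stand-in for the graph-cut step: the paper feeds the distmap into a
    graph-cut solver; a fixed threshold is used here for illustration."""
    return distmap > threshold

def update_background(bg_feats, frame_feats, mask, rate=0.05):
    """Assumed update rule: blend background features toward the current
    frame in regions labeled background, at a fixed rate."""
    fg = mask[..., None]                          # broadcast over feature dim
    return np.where(fg, bg_feats, (1 - rate) * bg_feats + rate * frame_feats)
```

In use, `extract_window_features` would be run once on the clean background image and once per incoming frame, with the mask and background update computed from the resulting distance map.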
