Abstract

Background modeling and subtraction, the task of detecting moving objects in a scene, is a fundamental and critical step for many high-level computer vision tasks. However, background subtraction remains an open and challenging problem, particularly in practical scenarios with drastic illumination changes and dynamic backgrounds. In this paper, we propose a novel foreground detection method based on Convolutional Neural Networks (CNNs) to address the challenges encountered in background subtraction. First, given a clean background image without moving objects, an adjustable neighborhood window is constructed around each pixel, and CNN features are extracted from each window with a pre-trained CNN model to form a feature-based background model. Second, features are extracted from the current frame of the video with the same operation, and the Euclidean distance between the CNN features of the current frame and those of the background image is used to build a distance map. Third, the distance map is fed into a graph-cut algorithm to obtain the foreground mask. To cope with background changes, the background model is updated at a fixed rate. Experimental results verify that the proposed approach effectively detects foreground objects in complex background environments and outperforms several state-of-the-art methods.
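The following is a minimal, illustrative Python sketch of the pipeline outlined above. It is not the authors' implementation: `extract_features` is a hypothetical placeholder for the pre-trained CNN feature extractor, a simple threshold stands in for the graph-cut segmentation, and the exponential blending rule with rate `alpha` is an assumed form of the background update.

```python
import numpy as np

# Placeholder for the pre-trained CNN feature extractor used in the paper.
# Here it simply flattens the window; in practice the window would be passed
# through a pre-trained CNN and an intermediate activation returned.
def extract_features(window):
    return window.astype(np.float32).ravel()

def distance_map(background, frame, win=5):
    """Per-pixel Euclidean distance between features of the current frame
    and the background model (edge padding handles border pixels)."""
    pad = win // 2
    bg = np.pad(background, pad, mode="edge")
    fr = np.pad(frame, pad, mode="edge")
    h, w = background.shape
    dist = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            f_bg = extract_features(bg[y:y + win, x:x + win])
            f_fr = extract_features(fr[y:y + win, x:x + win])
            dist[y, x] = np.linalg.norm(f_fr - f_bg)  # Euclidean distance
    return dist

# Simple threshold in place of the graph-cut step described in the abstract.
def foreground_mask(dist, thresh=30.0):
    return (dist > thresh).astype(np.uint8)

# Assumed background update: blend background pixels toward the current frame
# at a fixed rate alpha, leaving detected foreground pixels unchanged.
def update_background(background, frame, mask, alpha=0.05):
    bg = background.astype(np.float32)
    blended = (1.0 - alpha) * bg + alpha * frame.astype(np.float32)
    return np.where(mask == 0, blended, bg).astype(background.dtype)
```

In use, `distance_map` would be computed for each incoming grayscale frame against the maintained background image, the resulting mask extracted, and `update_background` applied before processing the next frame.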
