Abstract

Background subtraction is an important step in solving many computer vision problems. This paper proposes a novel background subtraction method based on a fully residual convolutional neural network (FR-CNN). The fully residual connections help fuse fine-scale and coarse-scale feature information efficiently, and the learned (non-handcrafted) features are more robust and efficient than handcrafted features. Furthermore, the method uses both temporal and spatial information for background subtraction, with an optical flow image providing the temporal information. A new background modeling technique is also proposed for efficient background subtraction. The model is trained on 50 randomly selected frames from each video sequence of the CDnet-2014 dataset and evaluated on the same dataset. Qualitative and quantitative analyses show that the FR-CNN model outperforms state-of-the-art methods.
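To make the described pipeline concrete, the listing below is a minimal sketch of a fully residual encoder-decoder that takes an RGB frame together with its optical flow image and predicts a per-pixel foreground mask. It assumes a PyTorch implementation; the layer counts, channel widths, and the simple concatenation of frame and flow are illustrative assumptions, not the paper's exact FR-CNN architecture.

import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class FRCNNSketch(nn.Module):
    """Encoder-decoder in which decoder features are fused with the
    matching encoder features through additive (residual) connections,
    combining fine-scale and coarse-scale information."""
    def __init__(self, in_channels=5, base=32):
        # in_channels = 3 (RGB frame) + 2 (optical-flow u, v) -- an assumption.
        super().__init__()
        self.stem = nn.Conv2d(in_channels, base, 3, padding=1)
        self.enc1 = ResidualBlock(base)
        self.down = nn.Conv2d(base, base * 2, 3, stride=2, padding=1)
        self.enc2 = ResidualBlock(base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = ResidualBlock(base)
        self.head = nn.Conv2d(base, 1, 1)  # per-pixel foreground probability

    def forward(self, frame, flow):
        x = self.stem(torch.cat([frame, flow], dim=1))
        f1 = self.enc1(x)                  # fine-scale features
        f2 = self.enc2(self.down(f1))      # coarse-scale features
        d1 = self.dec1(self.up(f2) + f1)   # residual fusion of the two scales
        return torch.sigmoid(self.head(d1))


# Usage: a 240x320 frame and its optical-flow image produce a foreground mask.
frame = torch.randn(1, 3, 240, 320)
flow = torch.randn(1, 2, 240, 320)
mask = FRCNNSketch()(frame, flow)          # shape (1, 1, 240, 320)

The additive fusion in the decoder stands in for the fully residual connections described above; a faithful reproduction would follow the architecture and background modeling details in the full paper.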
