Abstract

Deep learning is significantly advancing computer vision by learning to extract and describe the features most relevant to a given problem. A major drawback of deep learning-based approaches, however, is their need for annotated training data. In background subtraction, for example, a sequence of annotated frames labelling the foreground and background of the scene is required, and such sequences are needed for every scenario in which the module operates. Consequently, these methods cannot be deployed directly in new environments; they require training data from the new scene. To combine the high generalization ability and ease of use of traditional background subtraction techniques with the rich feature representations of deep learning, this paper employs deep learning-based techniques to investigate the image representation that best assists conventional background subtraction approaches. The proposed representation detects semantically moving objects and inpaints the image regions they occupy. Experiments show that the proposed representation retains the online, portable character of traditional techniques while improving their accuracy by approximately 20% when a conventional monocular camera is used.
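The pipeline described above can be sketched as follows. This is an illustrative approximation, not the authors' implementation: the binary object mask is assumed to come from any semantic detector of moving objects, the inpainting step is simplified to iterative neighbour-averaging (diffusion), and the conventional background subtractor is a basic running-average model.

```python
import numpy as np

def inpaint_mask(frame, mask, iters=100):
    """Fill masked (object) pixels by iterative 4-neighbour averaging,
    a simple stand-in for the inpainting step described in the paper."""
    img = frame.astype(np.float32).copy()
    img[mask] = img[~mask].mean()          # crude initial guess for the hole
    for _ in range(iters):
        # neighbour images; edge rows/columns reuse themselves
        up    = np.roll(img, -1, axis=0); up[-1]      = img[-1]
        down  = np.roll(img,  1, axis=0); down[0]     = img[0]
        left  = np.roll(img, -1, axis=1); left[:, -1] = img[:, -1]
        right = np.roll(img,  1, axis=1); right[:, 0] = img[:, 0]
        avg = (up + down + left + right) / 4.0
        img[mask] = avg[mask]              # only object pixels are updated
    return img

class RunningAverageBackground:
    """Conventional running-average background model fed with
    object-inpainted frames, so moving objects never pollute the model."""
    def __init__(self, alpha=0.05, thresh=25.0):
        self.alpha, self.thresh = alpha, thresh
        self.bg = None                     # background estimate (float32)

    def apply(self, frame, object_mask):
        clean = inpaint_mask(frame, object_mask)
        if self.bg is None:
            self.bg = clean
        else:
            self.bg = (1.0 - self.alpha) * self.bg + self.alpha * clean
        # foreground = pixels that differ strongly from the background model
        return np.abs(frame.astype(np.float32) - self.bg) > self.thresh
```

Because the background model only ever sees inpainted frames, a detected object cannot be absorbed into the background even if it stays still, which is one way the proposed representation can help a conventional subtractor.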
