Abstract

We study the problem of unknown foreground discovery in image streaming scenarios, where no prior information about the dynamic scene is assumed. In contrast to existing co-segmentation settings, where the entire dataset is given in advance, in streams new information emerges as content appears and disappears continually. The object classes that may be observed in the scene are unknown, so no detection model can be trained for a specific class set. We also assume that no repository of features from pre-trained convolutional neural networks is available, i.e., transfer learning is not applicable. We focus on the progressive discovery of foreground, which may or may not correspond to contextual objects of interest, depending on the camera trajectory or, more generally, on the perceived motion. Without any form of supervision, we construct, in a bottom-up fashion, dynamic graphs that capture region saliency and relative topology. These graphs are continually updated over time and, together with occlusion information, a fundamental property of the foreground–background relationship, are used to compute the foreground for each frame of the stream. We validate our method on indoor and outdoor scenes of varying complexity with respect to content, object motion, camera trajectory, and occlusions.
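The abstract only outlines the pipeline of maintaining a dynamic region graph and labeling foreground per frame. The sketch below is an illustrative toy, not the authors' implementation: all names (Region, DynamicSceneGraph, decay, threshold) and the simple scoring rule combining saliency with an occlusion cue are assumptions introduced purely to make the idea concrete.

```python
from dataclasses import dataclass, field

# Hypothetical region descriptor. In the actual method, regions would come from
# a bottom-up segmentation of each frame; here they are supplied as toy inputs.
@dataclass
class Region:
    rid: int                                      # persistent region identifier
    saliency: float                               # bottom-up saliency score in [0, 1]
    neighbors: set = field(default_factory=set)   # ids of adjacent regions (topology)
    occludes: set = field(default_factory=set)    # ids of regions this one occludes

class DynamicSceneGraph:
    """Toy stand-in for a continually updated region graph (illustrative only)."""

    def __init__(self, decay=0.8):
        self.regions = {}     # rid -> Region, persistent across frames
        self.decay = decay    # temporal smoothing factor for saliency

    def update(self, frame_regions):
        """Merge the regions observed in the current frame into the graph."""
        seen = set()
        for r in frame_regions:
            seen.add(r.rid)
            if r.rid in self.regions:
                old = self.regions[r.rid]
                # Temporally smooth saliency; topology and occlusions are refreshed
                # from the current frame's observation.
                r.saliency = self.decay * old.saliency + (1 - self.decay) * r.saliency
            self.regions[r.rid] = r
        # Drop regions that disappeared from the stream.
        for rid in list(self.regions):
            if rid not in seen:
                del self.regions[rid]

    def foreground(self, threshold=0.5):
        """Label regions as foreground from saliency plus an occlusion cue."""
        fg = set()
        for r in self.regions.values():
            occludes_other = bool(r.occludes)   # occluding another region hints at foreground
            if r.saliency > threshold or occludes_other:
                fg.add(r.rid)
        return fg

# Minimal usage: one frame with a salient, occluding region and a background region.
graph = DynamicSceneGraph()
frame1 = [Region(0, 0.9, {1}, {1}), Region(1, 0.2, {0}, set())]
graph.update(frame1)
print(graph.foreground())   # {0}: salient and occludes its neighbor
```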
