Abstract

Saliency detection for images and videos has become increasingly popular owing to its wide applicability. Despite enormous research effort, existing methods still struggle to maintain the spatiotemporal consistency of videos and to uniformly highlight entire objects. To address these issues, this paper proposes a superpixel-level spatiotemporal saliency model for saliency detection in videos. To detect salient objects, we first extract multiple spatiotemporal features combined with intra-frame-consistent motion information. Meanwhile, considering the inter-frame consistency of the foreground in videos, a set of foreground locations is obtained from previous frames. We then introduce foreground-background and local foreground contrast saliency cues over those features, exploiting the foreground location prior. These two improved contrast cues uniformly highlight the entire object and effectively suppress the background. Finally, an interactive dynamic fusion method integrates the resulting spatial and temporal saliency maps. The proposed approach is validated on challenging sets of video sequences. Subjective observations and objective evaluations demonstrate that the proposed model outperforms state-of-the-art spatiotemporal saliency methods.
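The abstract does not give the paper's exact fusion rule, but the idea of interactively and dynamically fusing spatial and temporal saliency maps can be sketched as follows. This is a minimal illustration, not the authors' method: the per-frame weight `alpha`, the mean-saliency confidence measure, and the function name `fuse_saliency` are all assumptions introduced here for clarity.

```python
import numpy as np

def fuse_saliency(spatial, temporal, eps=1e-8):
    """Fuse spatial and temporal saliency maps with a per-frame
    adaptive weight (hypothetical sketch, not the paper's formula)."""
    # Assumed confidence: mean saliency of each map for this frame.
    ws = spatial.mean()
    wt = temporal.mean()
    # The weight adapts dynamically per frame: the map with more
    # saliency mass contributes more to the fused result.
    alpha = wt / (ws + wt + eps)
    fused = alpha * temporal + (1.0 - alpha) * spatial
    # Normalise the fused map to [0, 1].
    rng = fused.max() - fused.min()
    return (fused - fused.min()) / (rng + eps)
```

In a real pipeline the weight would typically also draw on the foreground location prior from previous frames, as the abstract describes; the scalar mean used here merely stands in for such a confidence measure.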
