Abstract

In this work, we propose a 3D fully convolutional architecture for video saliency prediction that employs hierarchical supervision on intermediate maps (referred to as conspicuity maps) generated from features extracted at different abstraction levels. We complement the base hierarchical learning mechanism with two techniques, one for domain adaptation and one for domain-specific learning. For the former, we encourage the model to learn hierarchical general features in an unsupervised manner, using gradient reversal at multiple scales, so as to enhance generalization on datasets for which no annotations are available during training. For domain specialization, we employ domain-specific operations (namely, priors, smoothing and batch normalization) that specialize the learned features on individual datasets in order to maximize performance. Our experiments show that the proposed model yields state-of-the-art accuracy on supervised saliency prediction. When the base hierarchical model is equipped with domain-specific modules, performance further improves, outperforming state-of-the-art models on three out of five metrics on the DHF1K benchmark and reaching the second-best results on the other two. When we instead test it in an unsupervised domain adaptation setting, by enabling hierarchical gradient reversal layers, we obtain performance comparable to the supervised state of the art. Source code, trained models and example outputs are publicly available at https://github.com/perceivelab/hd2s.
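As a concrete illustration of the gradient reversal mechanism used for domain adaptation, the following is a minimal PyTorch sketch; it is not taken from the released code, and the names GradientReversal, grad_reverse and lambda_ are our own.

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambda in the
    backward pass, following Ganin and Lempitsky (2015)."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient flows into the shared feature extractor,
        # pushing it toward features a domain classifier cannot separate.
        return -ctx.lambda_ * grad_output, None


def grad_reverse(x, lambda_=1.0):
    """Insert before a domain-classifier head; lambda_ weights the reversal."""
    return GradientReversal.apply(x, lambda_)
```

Placing one such layer, followed by a small domain classifier, at each scale gives a hierarchical variant of this idea: the encoder is penalized whenever its features at any level reveal which dataset a clip comes from.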

Highlights

  • Video saliency prediction is the task of predicting human gaze fixations as observers perceive dynamic scenes, and it is typically carried out by estimating spatio-temporal saliency maps from an input video sequence

  • Domain-Specific Batch Normalization aims at mitigating the impact of data distribution shift on the statistics that batch normalization estimates for inference, which may become inaccurate when computed over different benchmarks (Li et al. 2016; Chang et al. 2019; Droste et al. 2020); a minimal sketch is given after this list

  • We propose HD2S, a new fully-convolutional network for video saliency prediction
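Below is a minimal sketch of the domain-specific batch normalization idea referenced above, assuming one BatchNorm3d per training dataset; the module name and routing argument are our own and do not reflect the actual HD2S implementation.

```python
import torch.nn as nn

class DomainSpecificBatchNorm3d(nn.Module):
    """Keeps a separate BatchNorm3d (running statistics plus affine
    parameters) per dataset, so that one domain's statistics never
    contaminate another's (cf. Chang et al. 2019)."""

    def __init__(self, num_features, num_domains):
        super().__init__()
        self.bns = nn.ModuleList(
            [nn.BatchNorm3d(num_features) for _ in range(num_domains)]
        )

    def forward(self, x, domain_idx):
        # Route each batch through the normalizer of its source dataset.
        return self.bns[domain_idx](x)
```

At both training and inference time, each batch is normalized with the statistics of the dataset it belongs to, so distribution shift between benchmarks no longer corrupts the running estimates.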


Summary

Introduction

Video saliency prediction is the task of predicting human gaze fixations as observers perceive dynamic scenes, and it is typically carried out by estimating spatio-temporal saliency maps from an input video sequence. In general, it can be seen as the upstream processing step of multiple applications, including object detection (Girshick 2015), behavior understanding (Lim et al. 2014; Lu et al. 2017), video surveillance (Li and Lee 2007; Mark et al. 2018; Guraya et al. 2010; Yubing 2011) and video captioning (Nguyen et al. 2013; Wang et al. 2018a; Yangyu et al. 2018).
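To make the task definition concrete, here is a toy 3D fully-convolutional predictor that maps a video clip to a spatio-temporal saliency map, with one conspicuity head per abstraction level; all layer widths, depths and the fusion rule are invented for exposition and do not correspond to the HD2S architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySaliency3D(nn.Module):
    """Illustrative 3D fully-convolutional saliency predictor: two
    encoder blocks, one 'conspicuity' head per abstraction level, and
    a simple fusion into the final spatio-temporal saliency map."""

    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(3, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(
            nn.Conv3d(16, 32, 3, stride=(1, 2, 2), padding=1), nn.ReLU()
        )
        self.head1 = nn.Conv3d(16, 1, 1)  # conspicuity map, fine level
        self.head2 = nn.Conv3d(32, 1, 1)  # conspicuity map, coarse level

    def forward(self, clip):              # clip: (B, 3, T, H, W)
        f1 = self.block1(clip)
        f2 = self.block2(f1)
        c1 = torch.sigmoid(self.head1(f1))
        c2 = torch.sigmoid(self.head2(f2))
        # Bring the coarse map back to input resolution and average.
        c2 = F.interpolate(c2, size=c1.shape[2:], mode="trilinear",
                           align_corners=False)
        return (c1 + c2) / 2              # saliency: (B, 1, T, H, W)
```

Under hierarchical supervision, each conspicuity map would additionally receive its own loss against the ground-truth fixation map, rather than supervising only the fused output.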


