Abstract

The past decade has witnessed the success of transform-domain methods for image saliency prediction. However, it is intractable to develop a transform-domain method for video saliency prediction, due to the limited choice of spatio-temporal transforms. In this paper, we propose learning the transform from training data, rather than relying on the predefined transforms used in existing methods. Specifically, we develop a novel deep Complex-valued network with learnable Transform (DeepCT) for video saliency prediction. The architecture of DeepCT includes the Complex-valued Transform Module (CTM), the inverse CTM (iCTM), and the Complex-valued Stacked Convolutional Long Short-Term Memory network (CS-ConvLSTM). In the CTM and iCTM, multi-scale pyramid structures are introduced, as we find that transforms at multiple receptive scales can improve the accuracy of saliency prediction. To make the CTM and iCTM “invertible”, we further propose a cycle consistency loss for training DeepCT, composed of a frame reconstruction loss and a complex feature reconstruction loss. Additionally, the CS-ConvLSTM is developed to learn the temporal saliency transition across video frames. Finally, the experimental results show that our DeepCT method outperforms 13 other state-of-the-art methods for video saliency prediction.
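To make the cycle consistency idea concrete, the following is a minimal PyTorch-style sketch, not the paper's implementation: a frame reconstruction term encourages CTM followed by iCTM to recover the input frame, and a complex feature reconstruction term encourages iCTM followed by CTM to recover a complex-valued feature. The module names ctm and ictm, the L1 distance, and the weighting factor alpha are illustrative assumptions.

import torch
import torch.nn as nn


class CycleConsistencyLoss(nn.Module):
    """Hypothetical sketch of a cycle consistency loss with two terms:
    frame reconstruction (frame -> CTM -> iCTM -> frame) and complex feature
    reconstruction (feature -> iCTM -> CTM -> feature)."""

    def __init__(self, ctm: nn.Module, ictm: nn.Module, alpha: float = 1.0):
        super().__init__()
        self.ctm = ctm      # Complex-valued Transform Module (CTM), assumed interface
        self.ictm = ictm    # inverse CTM (iCTM), assumed interface
        self.alpha = alpha  # assumed weight balancing the two reconstruction terms

    def forward(self, frame: torch.Tensor, complex_feat: torch.Tensor) -> torch.Tensor:
        # Frame reconstruction loss: the transform followed by its inverse
        # should map the input frame back to itself.
        frame_rec = self.ictm(self.ctm(frame))
        loss_frame = (frame_rec - frame).abs().mean()

        # Complex feature reconstruction loss: the inverse transform followed
        # by the forward transform should recover the complex-valued feature.
        feat_rec = self.ctm(self.ictm(complex_feat))
        loss_feat = (feat_rec - complex_feat).abs().mean()

        return loss_frame + self.alpha * loss_feat

Under these assumptions, the total loss pushes the learned transform pair toward approximate invertibility in both directions, which is what the quoted "invertible" property refers to; the .abs() call keeps the feature term real-valued even when complex_feat is a complex tensor.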
