Abstract

Spatiotemporal attention learning for video question answering (VideoQA) is a challenging task, and existing approaches treat the attention parts and the nonattention parts of a video in isolation. In this work, we propose to enforce the correlation between the attention parts and the nonattention parts as a distance constraint for discriminative spatiotemporal attention learning. Specifically, we first introduce a novel attention-guided erasing mechanism into the traditional spatiotemporal attention to obtain multiple aggregated attention features and nonattention features, and then learn to separate the attention and the nonattention features by an appropriate distance. The distance constraint is enforced by a metric learning loss, without increasing the inference complexity. In this way, the model learns to produce more discriminative spatiotemporal attention distributions over videos, enabling more accurate question answering. To incorporate the multiscale spatiotemporal information that is beneficial for video understanding, we additionally develop a pyramid variant on the basis of the proposed approach. Comprehensive ablation experiments validate the effectiveness of our approach, and state-of-the-art performance is achieved on several widely used VideoQA datasets.
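The abstract describes two components: an attention-guided erasing step that splits a video's features into attended and non-attended aggregates, and a metric-learning loss that pushes the two aggregates apart by a margin. The paper's exact formulation is not given here, so the following is a minimal NumPy sketch under assumed choices (top-k erasing by attention weight, L2 distance, hinge-style margin loss); all function names, the `erase_ratio` parameter, and the margin value are illustrative, not the authors' definitions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_erase_features(features, scores, erase_ratio=0.3):
    """Attention-guided erasing (sketch): aggregate the most-attended
    positions into an attention feature, erase them, and aggregate the
    remainder into a nonattention feature.

    features: (T, D) spatiotemporal feature vectors
    scores:   (T,)   raw attention logits
    Returns (attn_feat, nonattn_feat), each of shape (D,).
    """
    w = softmax(scores)
    k = max(1, int(len(w) * erase_ratio))      # how many positions to erase
    mask = np.zeros_like(w, dtype=bool)
    mask[np.argsort(w)[-k:]] = True            # top-k attended positions
    # weighted average over the attended positions
    attn_feat = (w[mask, None] * features[mask]).sum(0) / w[mask].sum()
    # erase attended positions, renormalize the remaining weights
    w_rest = w[~mask] / w[~mask].sum()
    nonattn_feat = (w_rest[:, None] * features[~mask]).sum(0)
    return attn_feat, nonattn_feat

def distance_margin_loss(attn_feat, nonattn_feat, margin=1.0):
    """Metric-learning constraint (sketch): penalize the model unless the
    attention and nonattention aggregates are at least `margin` apart."""
    d = np.linalg.norm(attn_feat - nonattn_feat)
    return max(0.0, margin - d)
```

Because the loss only touches training-time aggregates, dropping it at inference leaves the attention module unchanged, which matches the abstract's claim of no added inference complexity.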
