Abstract

Temporal sentence grounding (TSG) is a fundamental task in video understanding. Although existing methods train well-designed deep networks on large amounts of data, we find that they easily forget rarely appearing cases during training due to the imbalanced data distribution, which hurts generalization and leads to unsatisfactory performance. To tackle this issue, we propose a memory-augmented network, called Memory-Guided Semantic Learning Network (MGSL-Net), that learns and memorizes rarely appearing content in the TSG task. Specifically, our model consists of three main parts: a cross-modal interaction module, a memory augmentation module, and a heterogeneous attention module. We first align the given video-query pair with a cross-modal graph convolutional network, and then use a memory module to record the cross-modal shared semantic features in a domain-specific persistent memory. During training, the memory slots are dynamically associated with both common and rare cases, alleviating the forgetting issue. At test time, rare cases can thus be enhanced by retrieving the stored memories, leading to better generalization. Finally, the heterogeneous attention module integrates the enhanced multi-modal features in both the video and query domains. Experimental results on three benchmarks show the superiority of our method in both effectiveness and efficiency, substantially improving accuracy not only on the entire dataset but also on the rare cases.
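The abstract describes the memory augmentation only at a high level; the slot count, addressing scheme, and fusion step are not specified there. A minimal sketch of one plausible realization, attention-based retrieval from learnable persistent memory slots, is shown below; the class name, hyper-parameters, and residual fusion are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PersistentMemoryRead(nn.Module):
    """Hypothetical sketch of a persistent-memory read for feature enhancement.

    The slots are learnable parameters shared across the whole dataset, so
    semantics contributed by rare training cases can persist and be retrieved
    at test time. All details here are assumptions for illustration.
    """

    def __init__(self, num_slots: int = 128, dim: int = 256):
        super().__init__()
        # Domain-specific persistent memory: (num_slots, dim) learnable slots.
        self.memory = nn.Parameter(torch.randn(num_slots, dim) * 0.02)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, seq_len, dim) aligned cross-modal features.
        # Address the memory by similarity between features and slots.
        attn = F.softmax(features @ self.memory.t(), dim=-1)  # (B, L, S)
        retrieved = attn @ self.memory                        # (B, L, D)
        # Enhance the input with the retrieved shared semantics (residual fusion).
        return features + retrieved

# Usage: enhance a batch of 20-step, 256-d features.
mem = PersistentMemoryRead()
out = mem(torch.randn(2, 20, 256))  # -> (2, 20, 256)
```

Because the slots are parameters rather than per-sample states, gradient updates from rare examples are retained after training, which is consistent with the abstract's claim that rare cases can be enhanced by memory retrieval at inference.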
