Abstract

Video, as an information carrier, has gained overwhelming popularity in city surveillance and social networks such as WeChat, Weibo, and TikTok. To bridge the semantic gap between video content (e.g., a user and a landmark building) and textual information (e.g., the user's location), video captioning has emerged as an attractive technique in recent years. Existing works mostly focus on sentence-level Part-of-Speech (POS) information and use a Long Short-Term Memory (LSTM) network as the encoder, which neglects word- and phrase-level POS information and also fails to globally capture long-range temporal relations among video frames. To address these drawbacks, we leverage multi-granularity POS guidance to learn a Graph Convolutional Network (GCN) via meta-learning, abbreviated as GMMP (GCN Meta-learning with Multi-granularity POS), for generating high-quality video captions. GMMP models temporal dependencies by treating frames as nodes in a graph, and captures the POS information of words and phrases through a multi-granularity POS attention mechanism. We adopt meta-learning to better train the GCN by simultaneously maximizing the reward of the generated caption in a reinforcement task and the probability of the ground-truth caption in a supervised task. Experiments on several benchmark datasets verify the advantages of our GMMP model.
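The sketch below is not the authors' implementation; it only illustrates, under stated assumptions, the two ideas the abstract names: a graph convolution over frame features in which every frame attends to every other frame (so long-range temporal relations are modeled globally rather than sequentially as in an LSTM), and a combined objective that mixes a supervised likelihood term with a REINFORCE-style reward term. All names (`FrameGCN`, `combined_loss`, the similarity-based adjacency) are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch in PyTorch; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrameGCN(nn.Module):
    """One graph-convolution layer: frames are nodes, and the adjacency is
    built from pairwise feature similarity, so every frame can aggregate
    information from every other frame in a single step."""

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, feat_dim)
        sim = torch.bmm(frames, frames.transpose(1, 2))   # pairwise similarity
        adj = F.softmax(sim, dim=-1)                       # row-normalized adjacency
        agg = torch.bmm(adj, frames)                       # aggregate over all frames
        return F.relu(self.proj(agg))                      # (batch, num_frames, hidden_dim)


def combined_loss(log_probs_gt: torch.Tensor,
                  log_probs_sampled: torch.Tensor,
                  reward: torch.Tensor,
                  alpha: float = 0.5) -> torch.Tensor:
    """Mix the supervised task (maximize the likelihood of the ground-truth
    caption) with the reinforcement task (maximize the expected reward, e.g.
    CIDEr, of a sampled caption), as the abstract describes at a high level."""
    supervised = -log_probs_gt.mean()
    reinforce = -(reward.detach() * log_probs_sampled).mean()
    return alpha * supervised + (1.0 - alpha) * reinforce


if __name__ == "__main__":
    gcn = FrameGCN(feat_dim=512, hidden_dim=256)
    frames = torch.randn(2, 16, 512)   # 2 videos, 16 sampled frames each
    node_feats = gcn(frames)           # contextualized frame features
    print(node_feats.shape)            # torch.Size([2, 16, 256])
```

A usage note on the design choice assumed here: computing the adjacency from feature similarity lets distant but visually related frames exchange information directly, which is the global-relation property the abstract contrasts with LSTM encoders; how GMMP actually constructs its graph and balances the two objectives is specified in the full paper.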
