Abstract
Video grounding performs temporal localization in multimedia information retrieval: given an input query, it determines the temporal bounds of the target video span. A novel interactive multi-head self-attention (IMSA) transformer is proposed to localize an unseen moment in an untrimmed video from a given image query. A new semantically trained self-supervised approach is introduced to perform cross-domain learning that matches the image query to the video segment. It normalizes the convolution function, enabling efficient correlation and collection of semantically related video segments across time based on the image query. A dual adversarial contrastive learning method with Gaussian distribution parameters is developed to learn video representations. The proposed approach operates dynamically on various video components to achieve exact semantic synchronization and localization between queries and video, and the IMSA model localizes frames more accurately than competing approaches. Experiments on benchmark datasets show that the proposed model significantly increases temporal grounding accuracy: the moment occurrence is identified in the video with start and end boundaries, achieving an average recall of 86.45% and a mAP of 59.3%.
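To make the query–video interaction concrete, the following is a minimal PyTorch sketch of the high-level idea, not the authors' implementation: the module name (IMSASketch), the cross-attention/self-attention ordering, the feature dimension, and the per-frame start/end boundary head are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class IMSASketch(nn.Module):
    """Hypothetical sketch of interactive multi-head self-attention (IMSA)
    for image-query video grounding; dimensions and heads are assumptions."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Boundary head: per-frame logits for moment start and end.
        self.boundary = nn.Linear(dim, 2)

    def forward(self, video_feats, image_query):
        # video_feats: (B, T, dim) frame/segment features
        # image_query: (B, 1, dim) embedded image query
        # Inject query semantics into each video frame (cross-attention).
        fused, _ = self.cross_attn(video_feats, image_query, image_query)
        # Let query-conditioned frames interact across time (self-attention).
        ctx, _ = self.self_attn(fused, fused, fused)
        logits = self.boundary(ctx)            # (B, T, 2)
        return logits[..., 0], logits[..., 1]  # start logits, end logits

# Usage: localize a moment in a 100-frame video for one image query.
model = IMSASketch()
start_logits, end_logits = model(torch.randn(1, 100, 256),
                                 torch.randn(1, 1, 256))
start, end = start_logits.argmax(-1), end_logits.argmax(-1)
```

The argmax over per-frame logits yields the predicted start and end frame indices, corresponding to the start/end boundaries whose recall and mAP the abstract reports.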