Abstract

Most video captioning networks rely on recurrent models such as long short-term memory (LSTM). However, these recurrent models suffer from a long-range dependency problem and are therefore insufficient for video encoding. To overcome this limitation, several studies have investigated the relationships between objects or entities and have shown excellent performance in video classification and video captioning. In this study, we analyze a video captioning network with a non-local block in terms of temporal capacity. We introduce a video captioning method that captures long-range temporal dependencies with a non-local block. The proposed model uses local and non-local features independently. We evaluate our approach on the Microsoft Video Description Corpus (MSVD, YouTube2Text) dataset. The experimental results show that a non-local block applied along the temporal axis can solve the long-range dependency problem of the LSTM on video captioning datasets.
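For readers unfamiliar with the mechanism, the sketch below illustrates a non-local block applied along the temporal axis of per-frame features, in the spirit of the embedded-Gaussian non-local operation of Wang et al. (2018). This is a minimal PyTorch sketch, not the authors' implementation; the module name, feature dimension, and reduction factor are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalNonLocalBlock(nn.Module):
    """Non-local block over the temporal axis of per-frame features.

    Hypothetical sketch: `dim` and `reduction` are illustrative
    choices, not values reported in the paper.
    """

    def __init__(self, dim: int, reduction: int = 2):
        super().__init__()
        inner = dim // reduction
        # Channel-wise projections for query/key/value (1x1 convs in the
        # original formulation; linear layers suffice for 1-D sequences).
        self.theta = nn.Linear(dim, inner)
        self.phi = nn.Linear(dim, inner)
        self.g = nn.Linear(dim, inner)
        self.out = nn.Linear(inner, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, dim) -- a sequence of per-frame feature vectors.
        q = self.theta(x)                                 # (B, T, inner)
        k = self.phi(x)                                   # (B, T, inner)
        v = self.g(x)                                     # (B, T, inner)
        # Pairwise affinities between all time steps (embedded Gaussian),
        # so every frame can attend to every other frame directly.
        attn = F.softmax(q @ k.transpose(1, 2), dim=-1)   # (B, T, T)
        y = attn @ v                                      # (B, T, inner)
        # Residual connection: local features pass through unchanged
        # while the attention term adds non-local temporal context.
        return x + self.out(y)

# Example: 4 clips of 26 frames with 512-d CNN features per frame.
frames = torch.randn(4, 26, 512)
block = TemporalNonLocalBlock(dim=512)
out = block(frames)  # (4, 26, 512): shape preserved, context is global
```

Because affinities are computed between every pair of time steps, the path between distant frames has length one, which is how a temporal non-local block sidesteps the step-by-step propagation that limits an LSTM.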
