Abstract

Unlike static images, a video carries not only visual features but also richer semantic meaning and relationships between objects and scenes, owing to its temporal dimension. There have been many attempts to describe spatial and temporal relationships in videos, but encoder-decoder based models are not sufficient to capture detailed relationships. Specifically, a video clip often consists of several shots that appear unrelated, and simple recurrent models struggle with these shot changes. Recently, some studies have introduced approaches that describe visual relations through relational reasoning on visual question answering and action recognition tasks. In this paper, we introduce an approach that captures temporal relationships with a non-local block and a boundary-awareness system. We evaluate our approach on the Microsoft Video Description Corpus (MSVD, YouTube2Text) dataset. Experimental results show that a non-local block applied along the temporal axis can improve video captioning performance on the MSVD dataset.
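For illustration, the sketch below shows what a non-local block applied along the temporal axis might look like, following the general non-local formulation of Wang et al. (2018). The layer names, bottleneck dimension, and the omission of the paper's boundary-awareness component are assumptions for this sketch, not the authors' exact implementation.

```python
# Minimal sketch of a temporal non-local block (embedded Gaussian variant),
# assuming one feature vector per frame; details are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalNonLocalBlock(nn.Module):
    def __init__(self, dim, inner_dim=None):
        super().__init__()
        inner_dim = inner_dim or dim // 2   # bottleneck, as in Wang et al.
        self.theta = nn.Linear(dim, inner_dim)  # query embedding
        self.phi = nn.Linear(dim, inner_dim)    # key embedding
        self.g = nn.Linear(dim, inner_dim)      # value embedding
        self.out = nn.Linear(inner_dim, dim)    # restore channel dimension

    def forward(self, x):
        # x: (batch, T, dim) -- frame-level features along the temporal axis.
        q, k, v = self.theta(x), self.phi(x), self.g(x)
        # Pairwise affinities between all time steps, normalized with softmax.
        attn = F.softmax(q @ k.transpose(1, 2), dim=-1)   # (batch, T, T)
        y = attn @ v                       # aggregate features over all frames
        return x + self.out(y)             # residual connection

# Usage: refine frame features before feeding them to a caption decoder.
feats = torch.randn(4, 32, 512)            # 4 clips, 32 frames, 512-d features
refined = TemporalNonLocalBlock(512)(feats)
print(refined.shape)                       # torch.Size([4, 32, 512])
```

Because every time step attends to every other, such a block can relate frames across shot boundaries that a simple recurrent encoder would process only sequentially.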
