Abstract

Video captioning is currently considered one of the simplest ways to index and search data efficiently. In today's era, suitable captioning of video frames can be facilitated by deep learning architectures. Past research has focused on image captioning; however, the generation of high-quality captions with suitable semantics for different scenes has not yet been achieved. This work therefore aims to generate well-defined and meaningful captions for images and videos by combining convolutional neural networks (CNNs) and recurrent neural networks. Starting from the available dataset, features of images and videos were extracted using a CNN. The extracted feature vectors were then used to build a language model with long short-term memory (LSTM) units that generates the caption word by word. The model was trained with a softmax output layer, and its performance was computed using predefined evaluation metrics. The obtained experimental results demonstrate that the proposed model outperforms existing benchmark models.
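The abstract describes a CNN encoder feeding an LSTM decoder trained with a softmax output. The following is a minimal sketch of such an encoder-decoder captioning model, not the authors' actual implementation; the ResNet-50 backbone, vocabulary size, and all hyperparameters are illustrative assumptions.

```python
# Minimal CNN-encoder / LSTM-decoder captioning sketch (assumed architecture,
# not the paper's exact model). Requires torch and torchvision.
import torch
import torch.nn as nn
import torchvision.models as models


class CNNEncoder(nn.Module):
    """Extracts a fixed-length feature vector from an image (or video frame)."""
    def __init__(self, embed_size):
        super().__init__()
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop classifier
        self.fc = nn.Linear(resnet.fc.in_features, embed_size)

    def forward(self, images):
        with torch.no_grad():                 # keep the pretrained CNN frozen
            features = self.backbone(images)  # (B, 2048, 1, 1)
        return self.fc(features.flatten(1))   # (B, embed_size)


class LSTMDecoder(nn.Module):
    """Generates a caption word by word, conditioned on the CNN features."""
    def __init__(self, embed_size, hidden_size, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)  # softmax applied inside the loss

    def forward(self, features, captions):
        # Prepend the image feature as the first "token" of the input sequence.
        embeddings = torch.cat([features.unsqueeze(1), self.embed(captions)], dim=1)
        hidden, _ = self.lstm(embeddings)
        return self.fc(hidden)                # (B, T+1, vocab_size) logits


# Toy forward pass with random data: a batch of 2 images and captions of length 5.
encoder = CNNEncoder(embed_size=256)
decoder = LSTMDecoder(embed_size=256, hidden_size=512, vocab_size=1000)
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 1000, (2, 5))
logits = decoder(encoder(images), captions)
# Output at step t predicts caption token t, so drop the final time step.
loss = nn.CrossEntropyLoss()(logits[:, :-1].reshape(-1, 1000), captions.reshape(-1))
print(logits.shape, loss.item())
```

In this kind of setup the cross-entropy loss over the softmax outputs is what the abstract refers to as training the captions with a softmax function; at inference time the decoder would instead sample or greedily pick the highest-probability word at each step.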
