Abstract

Vlogs, recordings, news, and sports coverage are huge sources of multimodal information that is not limited to text but extends to audio, images, and video. Applications such as summary generation, image/video captioning, multimodal sentiment analysis, and cross-modal retrieval require Computer Vision together with Natural Language Processing techniques to extract relevant information. Information from the different modalities must be leveraged in order to extract quality content; hence, reducing the gap between modalities is of utmost importance. Image-to-text conversion is an emerging field that employs an encoder-decoder architecture: deep CNNs extract image features, and sequence-to-sequence models generate the textual description. This paper contributes to the growing body of research in multimodal information retrieval. To generate textual descriptions of images, we performed five experiments on the benchmark Flickr8k dataset, using different architectures including a simple sequence-to-sequence model, an attention mechanism, and a transformer-based architecture. The results were evaluated using the BLEU score. They show that the best descriptions are attained with the transformer architecture. We also compared our results with the pretrained visual model vit-gpt2, which incorporates a Vision Transformer.
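As a concrete illustration of the encoder-decoder pipeline described above, the following is a minimal PyTorch sketch of a CNN encoder feeding an LSTM decoder for caption generation. It is not the exact model or training setup used in our experiments; the ResNet-50 backbone, the embedding size of 256, the hidden size of 512, and the vocabulary size of 5,000 are illustrative assumptions.

import torch
import torch.nn as nn
import torchvision.models as models

class CNNEncoder(nn.Module):
    """Encodes an image into a fixed-size feature vector with a CNN backbone."""
    def __init__(self, embed_size):
        super().__init__()
        resnet = models.resnet50(weights=None)  # pretrained weights would normally be loaded
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop the final FC layer
        self.fc = nn.Linear(resnet.fc.in_features, embed_size)

    def forward(self, images):
        feats = self.backbone(images).flatten(1)   # (B, 2048)
        return self.fc(feats)                      # (B, embed_size)

class LSTMDecoder(nn.Module):
    """Generates caption token logits conditioned on the image feature."""
    def __init__(self, embed_size, hidden_size, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, img_feats, captions):
        tok = self.embed(captions)                              # (B, T, E)
        # prepend the image feature as the first step of the input sequence
        seq = torch.cat([img_feats.unsqueeze(1), tok], dim=1)   # (B, T+1, E)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                                 # (B, T+1, vocab)

# toy forward pass with random tensors standing in for images and caption token ids
encoder, decoder = CNNEncoder(256), LSTMDecoder(256, 512, vocab_size=5000)
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 5000, (2, 20))
logits = decoder(encoder(images), captions)
print(logits.shape)  # torch.Size([2, 21, 5000])

In practice the decoder is trained with teacher forcing on the Flickr8k reference captions, and generated sentences can then be scored against the references with a standard BLEU implementation such as nltk.translate.bleu_score.corpus_bleu.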
