Abstract
Computer vision has applications across many fields and is now ubiquitous in society. A fundamental problem in artificial intelligence (AI) is automatically describing the content of an image. Generating descriptions of visual data has been studied for a long time, but primarily for video; only in recent years has the emphasis shifted to describing still images in natural language, and recent advances in object detection have made the task of describing the scene in an image considerably easier. In this paper, we focus on one facet of visual recognition, namely image captioning. Our aim is to train a convolutional neural network (CNN) with several hundred hyperparameters on a large image dataset (ImageNet) and to combine this image classifier with a recurrent neural network (RNN) to generate a caption for the classified picture. We systematically analyze this deep-neural-network-based caption generation method, which takes an image as input and outputs an English sentence describing its content, and we examine its three components: the CNN, the RNN, and sentence generation. Replacing the CNN with each of three state-of-the-art architectures, we find that VGGNet performs best according to the BLEU score. We present the detailed architecture of our model and achieve a BLEU score of 56 on the Flickr8k dataset, where the state-of-the-art result stands at 66.
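
To make the pipeline concrete, the following is a minimal sketch, written in PyTorch (the abstract does not specify a framework), of the CNN-encoder/RNN-decoder design described above: a pretrained VGGNet supplies an image feature vector that conditions an LSTM language model. The class name CaptionModel, the layer sizes, and the vocab_size parameter are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
from torchvision import models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # CNN encoder: a pretrained VGG-16 with its classifier head removed,
        # used only to extract a fixed-length image feature vector.
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        self.encoder = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten())
        self.project = nn.Linear(512 * 7 * 7, embed_dim)
        # RNN decoder: an LSTM that emits one word per step, conditioned on
        # the image feature fed in as the first input token.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.project(self.encoder(images)).unsqueeze(1)  # (B, 1, E)
        words = self.embed(captions)                             # (B, T, E)
        seq = torch.cat([feats, words], dim=1)       # image feature goes first
        hidden, _ = self.lstm(seq)
        return self.out(hidden)   # logits over the vocabulary at each step

At inference time the decoder would typically be unrolled one word at a time, greedily or with beam search, starting from the projected image feature.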
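The reported figures of 56 and 66 are corpus-level BLEU scores scaled by 100. The snippet below illustrates how such a score can be computed with NLTK's corpus_bleu; the tokenized captions are invented, and the unigram (BLEU-1) weighting is an assumption, since the abstract does not state which n-gram order was used.

from nltk.translate.bleu_score import corpus_bleu

# One test image: several reference captions and one generated hypothesis.
references = [[["a", "dog", "runs", "on", "the", "beach"],
               ["a", "dog", "running", "along", "the", "shore"]]]
hypotheses = [["a", "dog", "is", "running", "on", "the", "shore"]]

# Unigram-only weights give BLEU-1; a score of 0.56 would be reported as 56.
score = corpus_bleu(references, hypotheses, weights=(1.0, 0.0, 0.0, 0.0))
print(f"BLEU-1: {100 * score:.1f}")  # corpus-level score scaled by 100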