Abstract

Image captioning approaches that rely on global Convolutional Neural Network (CNN) features cannot represent and describe all the important elements in complex scenes. In this paper, we propose semantic element embedding, which enriches the semantic representation of an image and updates the language model accordingly. For semantic element discovery, an object detection module predicts regions of the image, and a captioning model based on Long Short-Term Memory (LSTM) generates local descriptions for these regions. The predicted descriptions and categories are used to build a semantic feature that not only carries detailed information but also shares a word space with the descriptions, thus bridging the modality gap between visual images and semantic captions. We further integrate the CNN feature and the semantic feature in the proposed Element Embedding LSTM (EE-LSTM) model to generate the caption. Experiments on the MS COCO dataset demonstrate that the proposed approach outperforms conventional captioning methods and can be flexibly combined with baseline models to achieve superior performance.
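To make the described pipeline concrete, the PyTorch sketch below illustrates one plausible reading of the EE-LSTM fusion: words from the region descriptions and detected category names are embedded in the same word space as the caption vocabulary, pooled into a semantic feature, fused with the projected global CNN feature, and fed to an LSTM decoder at every step. The abstract does not give implementation details, so everything here is an assumption: the mean-pooling, the concatenation-based fusion, and all layer sizes (embed_dim, cnn_dim, hidden_dim) are illustrative choices, not the authors' configuration.

```python
import torch
import torch.nn as nn

class EELSTM(nn.Module):
    """Minimal sketch of an Element Embedding LSTM captioner.
    Hypothetical reconstruction from the abstract, not the paper's code."""

    def __init__(self, vocab_size, embed_dim=512, cnn_dim=2048, hidden_dim=512):
        super().__init__()
        # One embedding table shared by caption tokens and by the words of
        # the detected regions' local descriptions / category names, so
        # visual elements and captions live in a common word space.
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.cnn_proj = nn.Linear(cnn_dim, embed_dim)       # global CNN feature -> word space
        self.ctx_proj = nn.Linear(embed_dim * 2, embed_dim) # fuse visual + semantic features
        self.lstm = nn.LSTM(embed_dim * 2, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def semantic_feature(self, element_tokens):
        # element_tokens: (batch, n_words) word ids from the predicted region
        # descriptions and categories; mean-pooling their embeddings is one
        # simple (assumed) way to build the semantic feature.
        return self.word_embed(element_tokens).mean(dim=1)  # (batch, embed_dim)

    def forward(self, cnn_feat, element_tokens, caption_tokens):
        vis = self.cnn_proj(cnn_feat)                       # (batch, embed_dim)
        sem = self.semantic_feature(element_tokens)         # (batch, embed_dim)
        ctx = self.ctx_proj(torch.cat([vis, sem], dim=-1))  # fused image context
        words = self.word_embed(caption_tokens)             # (batch, T, embed_dim)
        # Feed the fused context at every step alongside the previous word.
        steps = ctx.unsqueeze(1).expand(-1, words.size(1), -1)
        h, _ = self.lstm(torch.cat([words, steps], dim=-1))
        return self.out(h)                                  # (batch, T, vocab_size) logits

# Example usage with random inputs (batch of 4 images):
model = EELSTM(vocab_size=10000)
logits = model(
    torch.randn(4, 2048),              # global CNN feature per image
    torch.randint(0, 10000, (4, 12)),  # words from region descriptions/categories
    torch.randint(0, 10000, (4, 15)),  # caption tokens (teacher forcing)
)
```

Injecting the fused context at every decoding step, rather than only at the first step, is one common design for keeping detected elements visible to the language model throughout generation; whether the paper does this or conditions only the initial state cannot be determined from the abstract alone.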
