Abstract

Automatic image captioning systems assign one or more sentences to images to describe their visual content. Most of these systems use attention‐based deep convolutional neural networks and recurrent neural networks (CNN‐RNN‐Att). However, such systems rarely make optimal use of latent variables and the side information contained in image concepts. This study integrates a latent variable into CNN‐RNN‐Att image captioning within a Bayesian modeling framework. High‐Level Semantic Concepts (HLSCs) derived from tags serve as the latent variable in the proposed model. Interpreting the Bayesian model's output localizes the image description process, breaking it into sub‐problems. A baseline descriptor subnet is then trained independently on each sub‐problem, acting as the sole captioning expert for its HLSC. The final output is the caption produced by the subnet whose HLSC is closest to the image content. The results indicate that applying CNN‐RNN‐Att to data localized by HLSCs improves captioning accuracy, making the proposed method competitive with the most accurate state‐of‐the‐art captioning systems.
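The abstract gives no implementation details, so the following is only a minimal sketch of the expert-routing idea it describes: one captioning subnet per HLSC, with the final caption taken from the subnet whose HLSC best matches the image. All names here (`HLSCRoutedCaptioner`, `DummyExpert`, the tag set, the toy concept model) are hypothetical; a real system would plug in trained CNN-RNN-Att subnets and a learned tag/concept predictor.

```python
from typing import Callable, Dict


class DummyExpert:
    """Stand-in for a CNN-RNN-Att subnet specialized to one HLSC."""

    def __init__(self, hlsc: str):
        self.hlsc = hlsc

    def caption(self, image) -> str:
        # A trained subnet would decode a sentence with attention here.
        return f"a caption from the '{self.hlsc}' expert"


class HLSCRoutedCaptioner:
    """Selects the expert subnet whose HLSC best matches the image."""

    def __init__(self, subnets: Dict[str, DummyExpert],
                 concept_model: Callable[[object], Dict[str, float]]):
        self.subnets = subnets              # one expert per HLSC tag
        self.concept_model = concept_model  # image -> scores over HLSC tags

    def caption(self, image) -> str:
        scores = self.concept_model(image)
        # Resolve the latent variable by picking the most probable HLSC,
        # then let that expert alone produce the final caption.
        best_tag = max(scores, key=scores.get)
        return self.subnets[best_tag].caption(image)


if __name__ == "__main__":
    tags = ["animals", "sports", "indoor scenes"]
    experts = {t: DummyExpert(t) for t in tags}

    # Toy concept model; in practice a classifier over image tags
    # approximating the posterior over the latent HLSC.
    def concept_model(image):
        return {"animals": 0.7, "sports": 0.2, "indoor scenes": 0.1}

    captioner = HLSCRoutedCaptioner(experts, concept_model)
    print(captioner.caption(image=None))  # caption from the 'animals' expert
```

This sketch uses hard routing (argmax over concept scores); a soft mixture that weights several experts' outputs by their scores would be an alternative reading of the same Bayesian framing, but the abstract's "only expert" wording suggests the hard-selection variant shown here.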
