Abstract

For the task of image annotation, traditional methods based on probabilistic topic models, such as correspondence Latent Dirichlet Allocation (corrLDA) [1], assume that an image is a mixture of latent topics. However, such models cannot directly model the correlation between topics, because the topic proportions of an image are generated independently. Our model, called correspondence Correlated Topic Model (corrCTM), extends the Correlated Topic Model (CTM) [2] from natural language processing to capture topic correlation through the covariance structure of a more flexible proportion distribution. Unlike previous LDA-based models, the topic proportions in the proposed corrCTM are correlated with one another. This topic correlation propagates from image features to annotation words through the generative process, so that the correspondence between images and words can be established. We derive an approximate inference and parameter estimation algorithm based on variational methods. We evaluate our model on two benchmark image datasets and show improved performance over corrLDA for both annotation and modeling word correlation.
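
As a rough illustration of the modeling difference described above (not the paper's implementation), the sketch below draws topic proportions from a Dirichlet prior, as in corrLDA, and from a logistic-normal prior with an explicit covariance matrix, as in CTM and corrCTM. All variable names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4  # number of latent topics (illustrative)

# corrLDA-style draw: Dirichlet topic proportions.
# Components interact only through the sum-to-one constraint,
# so the prior cannot express that two topics tend to co-occur.
alpha = np.ones(K)
theta_lda = rng.dirichlet(alpha)

# CTM/corrCTM-style draw: logistic-normal topic proportions.
# The covariance matrix Sigma explicitly encodes topic correlation;
# here topics 0 and 1 are made positively correlated (illustrative values).
mu = np.zeros(K)
Sigma = np.eye(K)
Sigma[0, 1] = Sigma[1, 0] = 0.8
eta = rng.multivariate_normal(mu, Sigma)
theta_ctm = np.exp(eta) / np.exp(eta).sum()  # softmax maps eta onto the simplex

print("Dirichlet proportions:      ", np.round(theta_lda, 3))
print("Logistic-normal proportions:", np.round(theta_ctm, 3))
```

In corrCTM this correlated draw governs both the image-region topics and the annotation-word topics, which is how topic correlation carries over from visual features to words.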
