Abstract

For the task of image annotation, traditional probabilistic topic models based on Latent Dirichlet Allocation (LDA) [1] assume that an image is a mixture of latent topics. An inherent limitation of LDA is its inability to model topic correlation, since the topic proportions of an image are generated independently. Motivated by the Correlated Topic Model (CTM) [2], which was introduced in natural language processing to model topic correlation within a document, we extend the popular LDA-based models (corrLDA [3], sLDA-bin [4], trmmLDA [5]) to CTM-based models (corrCTM, sCTM-bin, trmmCTM). We present a comprehensive comparison between the CTM-based and LDA-based models on three benchmark datasets, showing that the proposed CTM-based models achieve superior annotation performance by propagating topic correlation among image features and annotation words.
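For reference, the core modeling difference follows the standard formulations of LDA [1] and CTM [2]: the two models differ only in the prior placed on the per-image topic proportions $\theta$. A minimal sketch of this distinction (notation $\mu$, $\Sigma$, $\eta$ is the conventional CTM notation, not taken from this abstract):

\[
\text{LDA:} \quad \theta \sim \mathrm{Dirichlet}(\alpha)
\]
\[
\text{CTM:} \quad \eta \sim \mathcal{N}(\mu, \Sigma), \qquad \theta_k = \frac{\exp(\eta_k)}{\sum_{j} \exp(\eta_j)}
\]

Because the Dirichlet prior cannot express positive dependence between components, LDA treats topic proportions as (nearly) independent, whereas the logistic-normal prior of CTM captures correlations between topics through the full covariance matrix $\Sigma$.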
