Abstract

In this paper, we propose a probabilistic framework for efficient retrieval and indexing of image collections. The framework uncovers the hierarchical structure underlying a collection from image features, based on a hybrid model that combines generative and discriminative learning. We adopt the generalized Dirichlet mixture and maximum likelihood estimation for the generative stage in order to accurately estimate the statistical model of the data. The resulting model is then refined by a new discriminative likelihood that enhances the power of relevant features. Consequently, the model is well suited to high-dimensional data described by both semantic and low-level (visual) features. The semantic features are defined according to a known ontology, while the visual features capture visual appearance such as color, shape, and texture. For validation purposes, we also propose a new visual feature with desirable invariance properties under image transformations. Experiments on the Microsoft collection (MSRCID) clearly demonstrate the merits of our approach for both retrieval and indexing.
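For context, a minimal sketch of the generative component is given below, assuming the standard Connor–Mosimann parameterization of the generalized Dirichlet distribution; the notation ($D$, $M$, $\alpha_d$, $\beta_d$, $p_j$) is illustrative and not taken from the paper. For a feature vector $\vec{X} = (X_1, \dots, X_D)$ with $\sum_{d=1}^{D} X_d < 1$, a single generalized Dirichlet component has density

$$
p(\vec{X} \mid \vec{\alpha}, \vec{\beta}) \;=\; \prod_{d=1}^{D} \frac{\Gamma(\alpha_d + \beta_d)}{\Gamma(\alpha_d)\,\Gamma(\beta_d)} \, X_d^{\alpha_d - 1} \Bigl(1 - \sum_{k=1}^{d} X_k\Bigr)^{\gamma_d},
$$

where $\gamma_d = \beta_d - \alpha_{d+1} - \beta_{d+1}$ for $d = 1, \dots, D-1$ and $\gamma_D = \beta_D - 1$. An $M$-component mixture of such densities is then

$$
p(\vec{X} \mid \Theta) \;=\; \sum_{j=1}^{M} p_j \, p(\vec{X} \mid \vec{\alpha}_j, \vec{\beta}_j), \qquad \sum_{j=1}^{M} p_j = 1, \quad p_j \ge 0.
$$

In a maximum-likelihood setting, the mixture parameters $\Theta = \{p_j, \vec{\alpha}_j, \vec{\beta}_j\}$ would typically be estimated with an EM-style procedure before the discriminative refinement step described above.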
