Abstract

Image annotation plays an important role in bridging the semantic gap between low-level features and high-level semantic content in image access. In this paper, we tackle this task by annotating regions, the primitives of a visual scene. We propose a probabilistic model that characterizes spatial context for region annotation. The model provides a unifying framework integrating both feature-distribution models and spatial-context models, and a wide range of advanced modeling techniques can be used to extend it further. The approach is also potentially scalable to a large number of semantic concepts and a large number of images. Experiments with simple parametric models demonstrate the promise of our approach by investigating the impacts of neighbors, segmentation, and visual features.
