Abstract

This paper proposes a general framework that explicitly models both local and global information in a conditional random field. The proposed method extracts global image features in addition to local ones and uses them to predict the scene of the input image. Scene-based top-down information is then generated from the predicted scene; it encodes a global spatial configuration of labels and category compatibility over the image. Incorporating this global information helps resolve local ambiguities and yields locally and globally consistent image recognition. Despite the model's simplicity, the proposed method demonstrates good performance in image labeling on two datasets.
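
As a rough illustration only (the abstract does not give the model's exact formulation), a CRF that combines local potentials with scene-conditioned global ones might take an energy of the following shape; the symbols \varphi_i, \psi_{ij}, \gamma_i, \hat{s}, and g(\mathbf{x}) are hypothetical notation introduced here for exposition, not the paper's own:

\[
E(\mathbf{y} \mid \mathbf{x}) \;=\; \sum_{i} \varphi_i(y_i, \mathbf{x})
\;+\; \sum_{(i,j) \in \mathcal{E}} \psi_{ij}(y_i, y_j, \mathbf{x})
\;+\; \sum_{i} \gamma_i(y_i, \hat{s}),
\qquad
\hat{s} \;=\; \arg\max_{s} P\bigl(s \mid g(\mathbf{x})\bigr),
\]

where \varphi_i are local (bottom-up) potentials computed from local features, \psi_{ij} are pairwise terms over neighboring sites \mathcal{E}, g(\mathbf{x}) are the global image features used to predict the scene \hat{s}, and \gamma_i stands in for the scene-based top-down information, i.e. the expected spatial configuration of labels and label-scene compatibility at site i.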
