Abstract
As image databases for various applications keep growing at an explosive rate, automatically and efficiently classifying images in a way that mimics human visual perception is an extremely important problem. The spatial layout of an image is a form of abstract semantics and constitutes the image's visual grammar; exploiting it can dramatically improve the effectiveness of scene classification. We propose a Bayesian framework based on visual grammar that aims to reduce the gap between low-level features and high-level semantics. The algorithm uses semantic-based image segmentation to extract the major regions, applies a Bayesian framework to translate each region into an object word, and builds an object database from these words. It then constructs a visual grammar by leveraging the spatial layout of the scene image, and a visual vocabulary with grammar constraints for scene classification is formed via a Bayesian learning model. Experiments show that the algorithm is simple and effective, and that its classification results agree well with human visual perception.
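As a rough illustration of the kind of decision rule involved, the sketch below treats each image as a bag of "object words" (the labels assigned to its segmented regions) and classifies the scene with a naive Bayes model over those words. This is a hypothetical simplification, not the paper's actual method: the function names and toy data are invented for illustration, and the spatial-layout grammar constraint is omitted.

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (scene_label, [object words]) pairs.
    Returns per-scene word counts, scene counts, and the vocabulary."""
    word_counts = defaultdict(Counter)  # object-word counts per scene
    scene_counts = Counter()            # how often each scene appears
    vocab = set()
    for scene, words in examples:
        scene_counts[scene] += 1
        word_counts[scene].update(words)
        vocab.update(words)
    return word_counts, scene_counts, vocab

def classify(words, word_counts, scene_counts, vocab):
    """Pick the scene maximizing log P(scene) + sum log P(word | scene)."""
    total = sum(scene_counts.values())
    best, best_lp = None, -math.inf
    for scene, n in scene_counts.items():
        lp = math.log(n / total)  # prior P(scene)
        denom = sum(word_counts[scene].values()) + len(vocab)
        for w in words:
            # likelihood with Laplace (add-one) smoothing
            lp += math.log((word_counts[scene][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = scene, lp
    return best

# Toy usage with invented object words for two scene types.
model = train([
    ("beach", ["sky", "sand", "water"]),
    ("street", ["sky", "road", "car"]),
])
print(classify(["water", "sand"], *model))  # prints "beach"
```

In the paper's full framework, the grammar would additionally constrain which object-word configurations are plausible for a scene; here every bag of words is scored independently of spatial arrangement.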