Abstract

This work focuses on the search for a sample object (a car) in video sequences and images based on shape similarity. We form a new description for cars using relational graphs in order to annotate the images in which the object of interest (OOI) is present. Query by text can then be performed to extract images of the OOI from an automatically preprocessed database. The performance of general retrieval systems is unsatisfactory because of the gap between high-level concepts and low-level features. In this study we bridge this gap using a graph-based description scheme that provides an efficient way to obtain high-level semantics from low-level features. We investigate the full potential of the shape matching method based on relational graphs of objects with respect to its accuracy, efficiency, and scalability. We use hierarchical segmentation, which increases the accuracy of object detection in transformed and occluded images. Many shape-based similarity retrieval methods perform well when the initial segmentation is adequate; however, in most cases segmentation without a priori information or user interference yields unsuccessful object extraction results. Compared to other methods, the major advantage of the proposed method is its ability to create semantic segments automatically from combinations of low-level edge- or region-based segments using model-based segmentation. It is shown that a graph-based description of complex objects with model-based segmentation is a powerful scheme for automatic annotation of images and videos.
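To make the graph-based description scheme concrete, the sketch below illustrates (under stated assumptions, not the authors' implementation) how an object could be encoded as a relational graph whose nodes are segmented parts and whose edges carry spatial relations, and how two such graphs might be compared with graph edit distance. The part labels, relation names, and the use of networkx are illustrative assumptions.

```python
# Minimal sketch of a relational-graph object description and shape matching.
# Part labels ("body", "wheel") and relations ("below") are hypothetical;
# networkx's graph edit distance stands in for the paper's matching method.
import networkx as nx

def build_part_graph(parts, relations):
    """parts: dict part_id -> attribute dict (e.g. label, shape descriptor)
       relations: iterable of (part_a, part_b, relation_name) tuples"""
    g = nx.Graph()
    for pid, attrs in parts.items():
        g.add_node(pid, **attrs)
    for a, b, rel in relations:
        g.add_edge(a, b, relation=rel)
    return g

# Hypothetical model graph for a car: nodes are semantic parts,
# edges encode spatial relations between them.
car_model = build_part_graph(
    parts={"body": {"label": "body"},
           "front_wheel": {"label": "wheel"},
           "rear_wheel": {"label": "wheel"}},
    relations=[("body", "front_wheel", "below"),
               ("body", "rear_wheel", "below")],
)

# Candidate graph built from segments of a test image (toy example).
candidate = build_part_graph(
    parts={"r1": {"label": "body"},
           "r2": {"label": "wheel"},
           "r3": {"label": "wheel"}},
    relations=[("r1", "r2", "below"), ("r1", "r3", "below")],
)

# One possible shape-similarity score: graph edit distance,
# where 0 means the candidate's part structure matches the model exactly.
dist = nx.graph_edit_distance(
    car_model, candidate,
    node_match=lambda a, b: a["label"] == b["label"],
    edge_match=lambda a, b: a["relation"] == b["relation"],
)
print(f"graph edit distance to car model: {dist}")
```

In such a scheme, a low distance between a candidate region's graph and the model graph would trigger annotation of the image with the corresponding high-level concept (e.g. "car"), enabling subsequent query by text.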
