Abstract

Current solutions are still far from the ultimate goal of enabling users to retrieve a desired video clip from massive amounts of visual data in a semantically meaningful manner. In this study we propose a video database model (OVDAM) that provides automatic object, event, and concept extraction. Low-level feature values for objects, and the relations between objects, are determined using training sets and expert opinions. The N-Cut image segmentation algorithm extracts segments from video keyframes, and a genetic algorithm-based classifier assigns these segments (candidate objects) to object classes. At the top level, an ontology of objects, events, and concepts is used; the extracted objects and/or events are combined with this information to generate higher-level events and concepts. The system has a reliable video data model that enables ontology-supported fuzzy querying: RDF is used to represent the metadata, OWL to represent the ontology, and RDQL for querying. Queries containing objects, events, spatio-temporal clauses, concepts, and low-level features are handled.
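The abstract does not include code, so the following is a minimal sketch of the keyframe segmentation step it describes, using scikit-image's normalized-cut (N-Cut) implementation as a stand-in for whatever implementation the authors used. The function names (`slic`, `rag_mean_color`, `cut_normalized`) are scikit-image's, and the keyframe path is hypothetical.

```python
# Sketch: over-segment a keyframe into superpixels, then merge regions
# with a normalized cut over a region-adjacency graph.
from skimage import io, segmentation
from skimage import graph  # lived in skimage.future.graph in older releases

keyframe = io.imread("keyframe.png")  # hypothetical keyframe image

# Over-segmentation into superpixels (initial candidate regions).
labels = segmentation.slic(keyframe, compactness=30, n_segments=400,
                           start_label=1)

# Region-adjacency graph weighted by colour similarity.
rag = graph.rag_mean_color(keyframe, labels, mode="similarity")

# Normalized cut: each resulting label is a candidate object segment.
segments = graph.cut_normalized(labels, rag)
```

Each label in `segments` marks one candidate object region, which the model then passes to the classification stage.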
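The genetic algorithm-based classifier is likewise only named, not specified. As one illustrative interpretation, a GA can evolve feature weights for a nearest-prototype rule, with training accuracy as fitness; everything below (the fitness design, the operators, all names) is a toy assumption, not the authors' actual classifier.

```python
# Toy GA classifier sketch: individuals are per-feature weights for a
# weighted nearest-prototype rule; fitness is training accuracy.
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights, feats, labels, prototypes):
    # Weighted L1 distance of each segment's features to each class prototype.
    d = (np.abs(feats[:, None, :] - prototypes[None, :, :]) * weights).sum(axis=2)
    pred = d.argmin(axis=1)
    return (pred == labels).mean()

def evolve(feats, labels, prototypes, pop=40, gens=60, mut=0.1):
    n = feats.shape[1]
    population = rng.random((pop, n))
    for _ in range(gens):
        scores = np.array([fitness(w, feats, labels, prototypes)
                           for w in population])
        top = population[scores.argsort()[-pop // 2:]]        # selection
        cut = rng.integers(1, n, size=pop // 2)
        kids = np.where(np.arange(n) < cut[:, None],          # one-point crossover
                        top, top[::-1])
        kids = kids + rng.normal(0, mut, kids.shape)          # mutation
        population = np.vstack([top, kids.clip(0, None)])
    return max(population, key=lambda w: fitness(w, feats, labels, prototypes))
```

Given segment feature vectors, class labels, and per-class prototype vectors, `evolve(feats, labels, prototypes)` returns the weight vector that classified the training segments best.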
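Finally, a sketch of the metadata and query layer. The paper uses RDF, OWL, and RDQL; since RDQL (the pre-SPARQL query language from Jena) is no longer supported by common libraries, this example uses rdflib and an equivalent SPARQL query. The namespace, class, and property names are hypothetical, invented only to show the shape of such a query.

```python
# Sketch: RDF triples for one extracted object, plus a query over them.
from rdflib import Graph, Literal, Namespace, RDF

OV = Namespace("http://example.org/ovdam#")  # hypothetical ontology namespace
g = Graph()

g.add((OV.obj42, RDF.type, OV.Player))
g.add((OV.obj42, OV.appearsInKeyframe, Literal("shot17_kf3")))
g.add((OV.obj42, OV.dominantColor, Literal("red")))

# "Find players whose dominant colour is red" (SPARQL analogue of the
# object/low-level-feature queries described in the abstract).
results = g.query("""
    PREFIX ov: <http://example.org/ovdam#>
    SELECT ?obj ?kf WHERE {
        ?obj a ov:Player ;
             ov:dominantColor "red" ;
             ov:appearsInKeyframe ?kf .
    }
""")
for obj, kf in results:
    print(obj, kf)
```

Event, concept, and spatio-temporal clauses would appear as additional triple patterns in the same query body.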
