Abstract

Video scene understanding is attracting increasing research investment in artificial intelligence, pattern recognition, and computer vision, especially as sensor technologies advance. Developing autonomous unmanned vehicles able to recognize not just the targets appearing in a scene but the complete scene those targets are involved in (events, actions, situations, etc.) is becoming crucial for advanced intelligent surveillance systems. Alongside these consolidated technologies, Semantic Web technologies are also emerging, offering seamless support for high-level scene understanding. To this end, the paper proposes a systematic ontology modeling approach that supports and improves video content analysis by generating a comprehensive high-level scene description through semantic reasoning and querying. The ontology schema results from an integration of new and existing ontologies and provides design-pattern guidelines for obtaining a high-level description of a whole scenario. It starts from the description of basic targets in the video, supported by video tracking algorithms and target classification; it then provides a higher-level interpretation, compounding event-driven target interactions (for local activity comprehension), gradually reaching an abstraction level that enables a concise and complete scenario description.

Highlights

  • Unmanned Aerial Vehicles (UAVs) are extensively used for research, monitoring and assistance in several fields of application ranging from defense, emergency and disaster management to agriculture, delivery of items, filming and so on

  • This paper introduces a multi-ontology process design pattern to support knowledge acquisition and reuse about a scenario captured by a UAV

  • The TrackPOI:ThingObject instance is the high-level object that carries out the activity; since it represents the main participant of the activity, the TrackPOI:ThingObject class is declared equivalent to the foaf:Agent class (see the sketch after this list)
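
A minimal sketch of that equivalence assertion, using Python and rdflib; the TrackPOI namespace IRI below is an assumption made for illustration, while the TrackPOI:ThingObject and foaf:Agent identifiers come from the paper:

```python
# Sketch: declaring TrackPOI:ThingObject equivalent to foaf:Agent with rdflib.
# The TrackPOI namespace IRI is hypothetical, not the paper's actual one.
from rdflib import Graph, Namespace
from rdflib.namespace import FOAF, OWL, RDF

TRACKPOI = Namespace("http://example.org/trackpoi#")  # assumed IRI

g = Graph()
g.bind("TrackPOI", TRACKPOI)
g.bind("foaf", FOAF)

# TrackPOI:ThingObject is an OWL class declared equivalent to foaf:Agent,
# so a reasoner will classify every ThingObject instance as a foaf:Agent.
g.add((TRACKPOI.ThingObject, RDF.type, OWL.Class))
g.add((TRACKPOI.ThingObject, OWL.equivalentClass, FOAF.Agent))

print(g.serialize(format="turtle"))
```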


Summary

INTRODUCTION

Unmanned Aerial Vehicles (UAVs) are extensively used for research, monitoring, and assistance in several fields of application, ranging from defense, emergency and disaster management to agriculture, delivery of items, filming, and so on. Interpreting the video streams these platforms capture calls for semantic technologies that can lift low-level detections to a high-level description of the scene, and ontologies are the natural formalism for this task. However, poor ontology integration can result in excessive redundancy of information, with a consequent reduction in performance [14], which inevitably affects semantic reasoning and query processing [22]. To address this issue, this work proposes a novel and systematic ontology design to support Computer Vision methods in video scene comprehension. The output of video tracking and target classification (and labeling) is encoded in ontological assertions to infer new, enhanced knowledge that describes target interactions, events, activities, and situations appearing in the scene. The multi-layer knowledge schema shown in Figure 1 provides a systematic design process that incrementally yields a scenario description of the video content, formally supported by ontology modeling.
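
As a concrete illustration of this encoding step, the sketch below (Python + rdflib) shows how a tracker/classifier detection might be asserted as RDF triples and then queried with SPARQL. Every name here (the scene namespace, Target, detectedAs, confidence, the track identifier) is a hypothetical vocabulary invented for this example, not the ontology actually proposed in the paper:

```python
# Sketch: encoding video-tracking/classification output as ontological
# assertions and querying it. All identifiers below are illustrative
# assumptions, not the paper's vocabulary.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

SCENE = Namespace("http://example.org/scene#")  # assumed namespace

g = Graph()
g.bind("scene", SCENE)

# Assert tracker + classifier output: track 42 was classified
# as a Person with a given confidence score.
track = SCENE["track42"]
g.add((track, RDF.type, SCENE.Target))
g.add((track, SCENE.detectedAs, SCENE.Person))
g.add((track, SCENE.confidence, Literal(0.91, datatype=XSD.float)))

# Query the asserted knowledge: which targets were classified as persons?
results = g.query(
    """
    PREFIX scene: <http://example.org/scene#>
    SELECT ?t ?c WHERE {
        ?t scene:detectedAs scene:Person ;
           scene:confidence ?c .
    }
    """
)
for t, c in results:
    print(t, c)
```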

RELATED WORK
OBJECT LAYER
SITUATION LAYER
A CLOSER LOOK AT THE INCREMENTAL ONTOLOGY MODELING: A SCENARIO EXAMPLE
CONCLUSION
