Abstract

Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.

Highlights

  • The current evolution of ubiquitous computing and information networks is rapidly merging the physical and the digital worlds, enabling the ideation and development of a new generation of intelligent applications such as eHealth, Logistics, Intelligent Transportation, Environmental Monitoring, Smart Grids, Smart Metering or Home Automation

  • (Explanatory note: om is the namespace prefix for the Observations and Measurements (O&M) language and is placed, with a colon, before the concepts defined in the O&M schema; concepts from the environment ontology carry the environment prefix, and dbpedia marks a link from a location observation to a DBpedia URI)

  • To illustrate with an example, applications could access the human-generated observations stored as Resource Description Framework (RDF) graphs and retrieve them via SPARQL queries
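The idea behind the last two highlights can be sketched in a few lines of code. The snippet below is a toy illustration only: it models a single driver observation as RDF-style (subject, predicate, object) triples using made-up URIs in the spirit of the om, environment and dbpedia prefixes, and a pattern-matching function standing in for a SPARQL basic graph pattern. The actual system stores the graphs in a triple store such as Sesame and queries them with real SPARQL.

```python
# Illustrative sketch; URIs and names below are assumptions, not the
# paper's actual vocabulary.

OM = "http://www.opengis.net/om/"        # hypothetical O&M namespace
ENV = "http://example.org/environment#"  # hypothetical environment ontology
DBPEDIA = "http://dbpedia.org/resource/"

# One human-generated observation expressed as RDF-style triples.
graph = [
    ("obs1", OM + "observedProperty", ENV + "TrafficJam"),
    ("obs1", OM + "featureOfInterest", DBPEDIA + "A-6_motorway"),
    ("obs1", OM + "resultTime", "2013-05-21T08:15:00Z"),
]

def query(graph, subject=None, predicate=None, obj=None):
    """Return triples matching the pattern (None acts as a wildcard),
    a toy stand-in for a SPARQL basic graph pattern."""
    return [(s, p, o) for (s, p, o) in graph
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)
            and (obj is None or o == obj)]

# Which property did observation obs1 report?
matches = query(graph, subject="obs1", predicate=OM + "observedProperty")
```

An application consuming the Semantic Sensor Web would issue the equivalent SPARQL query (e.g. `SELECT ?prop WHERE { ?obs om:observedProperty ?prop }`) against the published RDF graphs.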


Introduction

The current evolution of ubiquitous computing and information networks is rapidly merging the physical and the digital worlds, enabling the ideation and development of a new generation of intelligent applications such as eHealth, Logistics, Intelligent Transportation, Environmental Monitoring, Smart Grids, Smart Metering or Home Automation. Context-aware HMI systems can be regarded as systems that use sensor data and user inputs to interact with applications, but HMIs may at the same time be regarded as sensing systems capable of producing real-world information for the Sensor Web. HMI systems for connected cars have to manage different driver interaction modalities (speech via microphones and loudspeakers; vision via displays; haptics via knobs, buttons and touch screens; etc.) as well as local and remote sensor information. Our design builds on the W3C's Multimodal Architecture and Interfaces (MMI) [15]. Following these principles, we discuss the design of in-vehicle context-aware multimodal HMI systems capable of collecting drivers' reports on different road, traffic or environmental situations and generating semantic representations of them.

Related Work
HMI Systems to Collect Driver Observations
Publishing Driver Observations into the Semantic Sensor Web
Semantic Description of Driver-Generated Observations
Publishing on the Semantic Sensor Web
Experimental Setup
Performance Analysis and Concept Validation
Performance Analysis
Test Scenario Design and Validation
Conclusions and Further Research