Abstract

The last few years have seen substantial progress in the field of smart objects (SOs): their number, diversity, performance and pervasiveness have all been increasing quickly, and this evolution is expected to continue. To the best of our knowledge, little work has been done to leverage this abundance of resources to develop assistive devices for Visually Impaired People (VIP). However, we believe that SOs can both enhance traditional assistive functions (e.g. obstacle detection, navigation) and offer new ways of interacting with the environment. After describing spatial and non-spatial perceptive functions enabled by SOs, this article presents the SO2SEES, a system designed to be an interface between its user and neighboring SOs. The SO2SEES allows VIP to query surrounding SOs in an intuitive manner, relying on knowledge bases distributed across Internet of Things (IoT) cloud platforms and the SO2SEES's own back-end. To evaluate and validate the proposed concepts, we have developed a simple working implementation of the SO2SEES system using semantic web standards. A controlled-environment test scenario has been built around this early SO2SEES system to demonstrate its feasibility. As future work, we plan to conduct field experiments of this first prototype with VIP end users.
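The abstract states only that the implementation relies on semantic web standards; the sketch below (in Python with rdflib) illustrates how a smart object's knowledge base could be queried with SPARQL. The use of the W3C SOSA ontology and all object names here are assumptions for illustration, not the paper's actual data model.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

SOSA = Namespace("http://www.w3.org/ns/sosa/")        # W3C sensor ontology
EX = Namespace("http://example.org/so2sees/")          # hypothetical namespace

g = Graph()
g.bind("sosa", SOSA)

# Hypothetical knowledge-base entries for a nearby smart thermostat:
# the sensor itself and one observation it produced.
g.add((EX.thermostat1, RDF.type, SOSA.Sensor))
g.add((EX.obs1, RDF.type, SOSA.Observation))
g.add((EX.obs1, SOSA.madeBySensor, EX.thermostat1))
g.add((EX.obs1, SOSA.hasSimpleResult, Literal(21.5)))

# A user-selected query such as "What is the temperature here?" could be
# translated into a SPARQL query over the object's knowledge base:
answers = g.query("""
    PREFIX sosa: <http://www.w3.org/ns/sosa/>
    SELECT ?sensor ?value WHERE {
        ?obs sosa:madeBySensor ?sensor ;
             sosa:hasSimpleResult ?value .
    }
""")
for sensor, value in answers:
    print(f"{sensor} reports {value}")   # e.g. rendered via text-to-speech
```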

Highlights

  • Perception of the environment is essential for human daily life activities and vision appears to be the best sensory channel for acquisition of spatial information because it provides relatively simultaneous perception of large spatial fields [1]

  • Whereas much research on assistive devices for visually impaired people (VIP) has focused on improving the sensing abilities of the devices themselves, our research aims at integrating assistive systems into the developing framework of the Internet of Things (IoT)

  • We have designed a new system, called SO2SEES, that allows VIP to get information from distributed knowledge bases belonging to various actors of the Internet of Things ecosystem


Summary

INTRODUCTION

Perception of the environment is essential for human daily life activities, and vision appears to be the best sensory channel for the acquisition of spatial information because it provides relatively simultaneous perception of large spatial fields [1]. The SO2SEES therefore lets VIP address neighboring SOs through predefined queries selected from a list. A difficulty with that approach is to keep a balance between the expressiveness of the system (i.e. the expressiveness of the union of all queries), the ease with which VIP can access queries (e.g. selecting a query from a list offering hundreds of choices is impractical), and the upstream work needed to define queries (as there is an infinity of possible queries). To solve this non-trivial problem, we mainly rely on contextual information about the SO neighborhood: queries are bound to specific objects or specific properties of objects, and the system only proposes requests when the corresponding devices are near the user, that is, generally when they are likely to be requested. Many other scenarios could have been evaluated, but their presentation is beyond the scope of this paper.
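The context-based filtering described above can be illustrated with a short sketch. The data structures and example queries below are hypothetical; they only show the principle that queries bound to a smart object are proposed to the user while that object is detected nearby, which keeps the list of choices short.

```python
from dataclasses import dataclass, field

@dataclass
class SmartObject:
    """A smart object advertising the queries bound to its properties."""
    name: str
    queries: list[str] = field(default_factory=list)

def proposed_queries(nearby: list[SmartObject]) -> list[str]:
    """Offer only queries whose target object is currently detected,
    keeping the menu short enough to browse non-visually."""
    return [q for so in nearby for q in so.queries]

# Hypothetical objects and bound queries, for illustration only.
bus_stop = SmartObject("bus_stop_42", ["When does the next bus arrive?"])
crossing = SmartObject("crossing_7", ["Is the pedestrian light green?"])

# Only the bus stop is in range (e.g. detected via its advertisements),
# so only its query is proposed to the user.
print(proposed_queries([bus_stop]))   # ['When does the next bus arrive?']
```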

