Abstract

This paper presents a novel depth-data-based, real-time obstacle detection and avoidance application that assists visually impaired (VI) individuals in navigating independently in indoor environments. The application uses a mainstream, computationally efficient mobile device as the development platform in order to create a solution that is not only aesthetically appealing, cost-effective, lightweight and portable, but also provides real-time performance and freedom from network connectivity constraints. To alleviate usability problems, a user-centered design approach has been adopted: semi-structured interviews with VI individuals in the local context were conducted to understand their micro-navigation practices, challenges and needs. The insights gained from these interviews have informed the design of our system and should also benefit other researchers developing similar applications. The resulting system design is presented, along with a detailed description of its obstacle detection and unique multimodal feedback generation modules. We plan to iteratively develop and test the initial prototype of the system with the end users to resolve any usability issues and better adapt it to their needs.
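The abstract does not reproduce the detection algorithm itself, but the pipeline it describes (a depth frame from a mobile sensor mapped to directional, multimodal cues) can be illustrated with a minimal sketch. The following Python code is a hypothetical construction, not the authors' method: the 1.5 m alert threshold, the three-band left/centre/right heuristic, and all function names are our assumptions for illustration only.

```python
import numpy as np

# Illustrative values only; the paper does not specify these parameters.
ALERT_DISTANCE_M = 1.5   # assumed distance at which an obstacle triggers feedback
REGIONS = ("left", "centre", "right")

def nearest_obstacles(depth_m: np.ndarray) -> dict:
    """Split a depth frame (metres, H x W) into three vertical bands and
    return the nearest valid depth reading in each band."""
    bands = np.array_split(depth_m, 3, axis=1)
    result = {}
    for name, band in zip(REGIONS, bands):
        valid = band[np.isfinite(band) & (band > 0)]  # drop sensor holes/invalid pixels
        result[name] = float(valid.min()) if valid.size else np.inf
    return result

def feedback_cues(depth_m: np.ndarray) -> list:
    """Map per-region obstacle distances to simple cues; the strings here
    stand in for the speech/vibration commands a real app would issue."""
    cues = []
    for region, dist in nearest_obstacles(depth_m).items():
        if dist < ALERT_DISTANCE_M:
            cues.append(f"obstacle {region}, {dist:.1f} m")
    return cues

if __name__ == "__main__":
    frame = np.full((240, 320), 3.0)   # synthetic frame: everything 3 m away
    frame[:, 110:210] = 0.8            # obstacle 0.8 m ahead in the centre band
    print(feedback_cues(frame))        # -> ['obstacle centre, 0.8 m']
```

In a deployed system the per-frame cues would be rate-limited and routed to the device's speech and vibration APIs; the band-splitting step is one simple stand-in for whatever segmentation the paper's obstacle detection module actually performs.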

Highlights

  • According to the World Health Organization (WHO), 285 million people are estimated to be visually impaired (VI) worldwide, of which 39 million are blind and 246 million have low vision [1]; these numbers are projected to increase steadily in the coming years [2, 3]. Navigating independently in an indoor environment is a major challenge for VI individuals: the inability to do so causes them frustration, undermines their confidence and autonomy, and poses a serious risk to their physical safety [4,5,6,7].

  • Participant recruitment and interview sessions: semi-structured interviews were scheduled with ten VI participants, who were recruited via the Disability Center located in the College of Education, Girls’ Campus, King Saud University (KSU), Riyadh, Saudi Arabia.

  • The low vision (LV) participants reported varying levels of residual visual perception: LV2 had no vision in one eye while the other eye was normal; LV5 could not decipher text such as room numbers or building names; LV1, LV3, LV4 and LV6 could not discern details such as facial expressions; and LV3 mentioned not being able to see in environments where the light intensity is very high or very low.



Introduction

According to the World Health Organization (WHO), 285 million people are estimated to be visually impaired (VI) worldwide, of which 39 million are blind and 246 million have low vision [1]; these numbers are projected to increase steadily in the coming years [2, 3]. Navigating independently in an indoor environment is a major challenge for VI individuals: the inability to do so causes them frustration, undermines their confidence and autonomy, and poses a serious risk to their physical safety [4,5,6,7]. Guide dogs and human companions are typically relied upon to assist in detecting and avoiding obstacles. However, a sighted companion may give extraneous, ambiguous or incorrect directions because of a lack of knowledge and understanding of how VI people navigate [15], and it may not be possible or desirable for such a human aide to be available at all times (according to a recent report, 26% of blind adults in the United States live alone [16, 17]).

