Abstract

In this paper, we propose a real-time assistive localization approach to help blind and visually impaired people navigate indoor environments. The system consists of a mobile vision front end, with a portable panoramic lens mounted on a smartphone, and a remote image feature-based database of the scene on a GPU-enabled server. Compact and effective omnidirectional image features are extracted and represented on the smartphone front end, then transmitted to the server in the cloud. The features of a short video clip are used to search the database of the indoor environment via image-based indexing to find the location of the current view within the database, which is associated with floor plans of the environment. A median-filter-based multi-frame aggregation strategy is used for single-path modeling, and a 2D multi-frame aggregation strategy based on the candidates' distribution densities is used for multi-path environmental modeling to provide the final location estimate. To deal with the high computational cost of searching a large database in a realistic navigation application, data-parallel and task-parallel properties are identified in the database indexing process, and the computation is accelerated using multi-core CPUs and GPUs. A user-friendly HCI, designed particularly for the visually impaired, is implemented on an iPhone; it also supports system configuration and scene modeling for new environments. Experiments on a database of an eight-floor building demonstrate the capacity of the proposed system, with real-time response (14 fps) and robust localization results.
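To make the single-path aggregation step concrete, the following is a minimal sketch, not the authors' implementation: per-frame indexing is assumed to return one best-match database index per video frame along the modeled path, and a sliding median filter then suppresses isolated mismatches before the final location estimate is taken. The function name, window size, and example values are illustrative assumptions.

```python
import numpy as np

def median_filter_aggregate(frame_locations, window=5):
    """Smooth per-frame location indices along a single modeled path.

    frame_locations: sequence of best-match database indices, one per
    video frame; isolated wrong matches show up as outliers.
    window: odd sliding-window width (assumed value, not from the paper).
    """
    locs = np.asarray(frame_locations, dtype=float)
    half = window // 2
    # Pad by repeating edge values so the output length matches the input.
    padded = np.pad(locs, half, mode="edge")
    return np.array([np.median(padded[i:i + window])
                     for i in range(len(locs))])

# Example: the fourth frame is an outlier match; the median suppresses it.
locations = [120, 121, 121, 455, 122, 123, 123]
print(median_filter_aggregate(locations))
```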

Highlights

  • An indoor localization system is of significant importance to the visually impaired in their daily lives, helping them localize themselves and navigate unfamiliar indoor environments.

  • In this paper, we propose a real-time assistive localization approach to help blind and visually impaired people navigate indoor environments.

  • Compact and effective omnidirectional image features are extracted on the smartphone front end and transmitted to the server in the cloud. The features of a short video clip are used to search the database of the indoor environment via image-based indexing to find the location of the current view within the database, which is associated with floor plans of the environment (a minimal sketch of this indexing step follows this list).
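The indexing step can be pictured as a nearest-neighbor search of each frame's feature vector against the pre-built scene database. The sketch below is a minimal NumPy illustration under assumed array shapes and names, not the paper's implementation; the all-pairs distance computation is the data-parallel part that the paper maps onto multi-core CPUs and GPUs.

```python
import numpy as np

def index_frames(query_feats, db_feats, db_locations):
    """Match each query frame feature against the scene database.

    query_feats:  (F, D) array, one omnidirectional feature per frame.
    db_feats:     (N, D) array of features for the modeled environment.
    db_locations: length-N list of floor-plan locations aligned with db_feats.
    Returns the floor-plan location of the nearest database feature per frame.
    """
    # Pairwise squared Euclidean distances for all frames at once:
    # ||q||^2 - 2 q.d + ||d||^2, broadcast to an (F, N) matrix.
    d2 = (np.sum(query_feats**2, axis=1)[:, None]
          - 2.0 * query_feats @ db_feats.T
          + np.sum(db_feats**2, axis=1)[None, :])
    best = np.argmin(d2, axis=1)  # nearest database entry per frame
    return [db_locations[i] for i in best]
```

The per-frame results from such a search are what the aggregation strategies (median filtering for a single path, density-based 2D aggregation for multiple paths) combine into one location estimate.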


Summary

Introduction

An indoor localization system is of significant importance to the visually impaired in their daily lives, helping them localize themselves and navigate unfamiliar indoor environments. Our approach requires no large deployment of extra sensors or infrastructure (such as beacons or RFID tags) in the environment, and no complex devices beyond an everyday smartphone (such as an iPhone, which many blind people already use daily) and a low-cost compact lens (typically under 100 dollars, including a smartphone case). Another existing solution, proposed by Legge et al. [11], uses a handheld sign reader and widely distributed digitally encoded signs to give location information to the visually impaired. (Figure: sample omnidirectional images shown around a map, with their geo-locations indicated on the floor plan.)

Related Work
System Overview
Preprocessing and Omni-Feature Extraction
Result
Localization by Indexing
Rotation-invariant feature extraction and rotation estimation
Multi-frame parallel indexing and median filtering
Comparison of single-versus-multiple frame indexing
Impact of tilt angles of the camera
Localization in wide areas
Time performance experiment
Findings
Conclusion and Discussion
