Abstract
Indoor wayfinding is a major challenge for people with visual impairments, who are often unable to see visual cues such as informational signs, landmarks and structural features that people with normal vision rely on for wayfinding. We describe a novel indoor localization approach to facilitate wayfinding that uses a smartphone to combine computer vision and a dead reckoning technique known as visual-inertial odometry (VIO). The approach uses sign recognition to estimate the user's location on the map whenever a known sign is recognized, and VIO to track the user's movements when no sign is visible. The advantages of our approach are (a) that it runs on a standard smartphone and requires no new physical infrastructure, just a digital 2D map of the indoor environment that includes the locations of signs in it; and (b) that it allows the user to walk freely without having to actively search for signs with the smartphone (which is challenging for people with severe visual impairments). We report a formative study with four blind users demonstrating the feasibility of the approach and suggesting areas for future improvement.
Highlights
The key to any wayfinding aid is localization – a means of estimating and tracking a person's location as they travel in an environment.
Computer vision is a promising localization approach, but most past work in this area has either required special hardware [5] or the use of detailed 3D models of the environment [6] that are time-consuming to generate and make the approach vulnerable to superficial environmental changes. To overcome these limitations of past work, we developed an indoor localization system [7] that combines step counting with computer vision-based sign recognition.
We have implemented a prototype system on the iPhone 8 smartphone, using OpenCV to perform sign recognition and pose estimation, and Apple's ARKit iOS software to perform visual-inertial odometry (VIO). (ARKit is compatible with the iPhone 6s and newer iPhone models; a similar tool, ARCore, is available to perform VIO on newer Android devices.) Currently the system runs as a logging app that captures video in real time and saves the video and VIO data to the smartphone's memory; the data are later analyzed offline on a computer.
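The core fusion logic described above – absolute position fixes from recognized signs, with VIO dead reckoning in between – can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the class and method names are our assumptions, and the 3D pose estimate is simplified to 2D map coordinates.

```python
from dataclasses import dataclass


@dataclass
class Pose:
    """2D position on the digital map, in metres."""
    x: float
    y: float


class SignVIOLocalizer:
    """Illustrative fusion of sign-based absolute fixes with relative VIO motion.

    Hypothetical sketch: `sign_map` maps a sign identifier to its known
    (x, y) location on the 2D floor map.
    """

    def __init__(self, sign_map):
        self.sign_map = sign_map
        self.pose = None  # unknown until the first sign is recognized

    def on_sign_recognized(self, sign_id, camera_offset=(0.0, 0.0)):
        # Absolute fix: snap the estimate to the sign's mapped location,
        # optionally corrected by the camera-to-sign offset recovered
        # from pose estimation. This also cancels accumulated VIO drift.
        sx, sy = self.sign_map[sign_id]
        self.pose = Pose(sx + camera_offset[0], sy + camera_offset[1])

    def on_vio_delta(self, dx, dy):
        # Relative update: dead-reckon between sign sightings by
        # accumulating per-frame VIO displacement vectors.
        if self.pose is not None:
            self.pose = Pose(self.pose.x + dx, self.pose.y + dy)
```

For example, recognizing a sign mapped at (10.0, 4.0) and then moving 1.0 m along x and 0.5 m along y (as reported by VIO) would yield an estimate of (11.0, 4.5); the next sign sighting would discard any drift accumulated along the way.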
Summary
The key to any wayfinding aid is localization – a means of estimating and tracking a person's location as they travel in an environment. There are a range of indoor localization approaches, including Bluetooth beacons [1], Wi-Fi triangulation, infrared light beacons [2] and RFIDs [3]. All of these approaches incur the cost of installing and maintaining physical infrastructure, or of updating the system as the existing infrastructure changes (e.g., whenever Wi-Fi access points change). Computer vision is a promising localization approach, but most past work in this area has either required special hardware [5] or the use of detailed 3D models of the environment [6] that are time-consuming to generate and make the approach vulnerable to superficial environmental changes (e.g., new carpeting, moved tables and chairs). In our new approach we replaced step counting with visual-inertial odometry (VIO) [8], which functions well even if the user is walking with an irregular gait.
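The advantage of VIO over step counting for irregular gaits can be illustrated with a toy comparison. This sketch is ours, not the authors': step counting must assume a fixed stride length, whereas VIO measures displacement directly, so uneven steps are captured in the motion estimates themselves.

```python
import math


def integrate_vio(start, deltas):
    """Dead reckoning from VIO: sum the measured per-frame displacement
    vectors (in metres). Short or uneven steps are reflected directly
    in the deltas, so no gait model is needed."""
    x, y = start
    for dx, dy in deltas:
        x += dx
        y += dy
    return x, y


def step_count_estimate(start, heading_rad, n_steps, stride_m=0.7):
    """Dead reckoning from step counting: requires an assumed stride
    length (0.7 m here, an arbitrary illustrative value), which is
    inaccurate for users walking with an irregular gait."""
    x, y = start
    return (x + n_steps * stride_m * math.cos(heading_rad),
            y + n_steps * stride_m * math.sin(heading_rad))
```

For instance, three uneven forward steps measured by VIO as 0.5 m, 0.25 m and 0.25 m integrate to the true 1.0 m travelled, while a step counter seeing three steps with the assumed 0.7 m stride would report 2.1 m.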