Abstract

With the rapid growth of smartphone adoption and computational capability, there is an opportunity today to build a usable navigation system for the visually impaired. A smartphone contains many sensors for sensing the surrounding environment, such as GPS, cameras, and inertial sensors. However, building such a navigation system poses many challenges, including low-level environment sensing, accuracy, and efficient data processing. In this paper, we address some of these challenges and present a system for traffic light detection, which is fundamental to outdoor pedestrian navigation by the visually impaired. In this system, we analyze the video feed from a smartphone's camera using model-based computer vision techniques to detect traffic lights. Specifically, we utilize both color and shape information, as they are the most prominent features of traffic lights. Additionally, we use the smartphone's inertial sensors to compute its 3D orientation and predict the segment of a video frame that is most likely to contain the traffic light. By processing only that segment, we reduce the computation time by an order of magnitude on average. We evaluated this system in various lighting conditions, such as cloudy, sunny, and at night, and achieved over 96% accuracy in traffic light detection and recognition.
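The abstract does not give the detection parameters, but a color-and-shape pipeline of the kind it describes can be sketched with OpenCV: threshold the frame in HSV for lamp colors, then keep only roughly circular blobs. The HSV ranges, area bound, and circularity cutoff below are illustrative assumptions, not the paper's actual values.

```python
# Minimal sketch of color-and-shape traffic light detection (assumed parameters).
import cv2
import numpy as np

# Hypothetical HSV intervals for red and green lamps; red wraps around hue 0.
COLOR_RANGES = {
    "red":   [((0, 120, 120), (10, 255, 255)), ((170, 120, 120), (180, 255, 255))],
    "green": [((40, 80, 120), (90, 255, 255))],
}

def detect_traffic_lights(frame_bgr):
    """Return (color, bounding_box) candidates found in a BGR frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    detections = []
    for color, ranges in COLOR_RANGES.items():
        # Combine the masks for each HSV interval of this color.
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        # Shape check: keep blobs that are roughly circular, like a lamp.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            area = cv2.contourArea(c)
            if area < 30:  # reject noise specks (assumed threshold)
                continue
            perimeter = cv2.arcLength(c, True)
            circularity = 4 * np.pi * area / (perimeter * perimeter + 1e-9)
            if circularity > 0.7:  # assumed circularity cutoff
                detections.append((color, cv2.boundingRect(c)))
    return detections
```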
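The orientation-based ROI prediction can likewise be illustrated under a pinhole camera model: given the device pitch from the inertial sensors (e.g., the platform's rotation-vector API), project an assumed elevation range of traffic lights into pixel rows and crop the frame to that band before running detection. The elevation range below is a hypothetical assumption; processing only the returned band is what yields the speedup the abstract reports.

```python
# Minimal sketch of orientation-based region-of-interest (ROI) selection,
# assuming a pinhole camera and a known vertical field of view.
import math

def roi_rows(pitch, frame_height, vertical_fov,
             min_elev=math.radians(5), max_elev=math.radians(25)):
    """Return (top_row, bottom_row) of the band likely to contain lights.

    pitch: camera pitch in radians (positive = tilted upward), from the IMU.
    vertical_fov: camera's vertical field of view in radians.
    min_elev/max_elev: assumed elevation range of traffic lights.
    """
    # Focal length in pixels for the pinhole model.
    fy = (frame_height / 2) / math.tan(vertical_fov / 2)
    cy = frame_height / 2

    def row_for(elev):
        rel = elev - pitch  # ray angle relative to the optical axis
        # Rays above the optical axis project to smaller row indices.
        return int(round(cy - fy * math.tan(rel)))

    top = max(0, min(frame_height, row_for(max_elev)))
    bottom = max(0, min(frame_height, row_for(min_elev)))
    return top, bottom

# Usage: crop first, then detect only within the band, e.g.
#   top, bottom = roi_rows(pitch, frame.shape[0], math.radians(45))
#   detections = detect_traffic_lights(frame[top:bottom, :])
```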
