Abstract

Supporting visually impaired people during navigation is a challenging task that involves localization, tracking, obstacle avoidance, and path guidance. Researchers have experimented with sonar, RFID, GPS, NFC, walking sticks, waist-mounted devices, and computer vision modules for blind navigation. We developed Sonar Glass, which uses five sonar sensors to detect obstacles and report each detection with its direction and timestamp. Just as humans see to the left, right, front, top, and bottom depending on eye angle and head pose, the Sonar Glass is designed to provide obstacle information at the corresponding angle: the wearer's head movement activates one pair of sensors per module. The main goal of this paper is to understand the human eye-movement mechanism and to develop a synchronization protocol for each pair of visual sensors mounted over the two eyes. The central challenge lies in understanding and simulating the human vision mechanism and in exploring how artificial intelligence can contribute to technologies for the visually impaired. We use the log-polar transform to simulate human retinal image mapping, and we have also implemented the Scale-Invariant Feature Transform (SIFT). To our knowledge, this is the first system in which both human eyes are replaced by vision sensors. After the obstacle estimate from one sensor pair is compared with the readings of the other sensors, a voice track is activated; the wearer uses the nearest-obstacle information to avoid the obstacle, and the system extracts spatial information about obstacles ahead of the user to provide an early warning. Unique in its design, the Sonar Glass's synchronization protocol for each pair of related sensors reports possible object information in the corresponding direction. The system was simulated in MATLAB, and the real-time results from both indoor and outdoor tests are promising.
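The retinal mapping mentioned above can be illustrated with a minimal log-polar sampler. This is a hedged sketch, not the paper's implementation: the bin counts and the nearest-neighbour sampling are illustrative assumptions. It shows the key property the abstract relies on, namely dense sampling near the image centre (the fovea) and coarser sampling toward the periphery.

```python
import numpy as np

def log_polar(image, rho_bins=64, theta_bins=64):
    """Resample a grayscale image onto a log-polar grid.

    Radial bins are spaced logarithmically, so many bins fall near
    the centre and few near the edge, a simplified model of how the
    retina samples the visual field.
    """
    h, w = image.shape
    cy, cx = h / 2.0, w / 2.0
    max_r = min(cx, cy)
    out = np.zeros((rho_bins, theta_bins))
    for i in range(rho_bins):
        # logarithmic radial spacing: r runs from 1 up to max_r
        r = max_r ** (i / (rho_bins - 1))
        for j in range(theta_bins):
            t = 2.0 * np.pi * j / theta_bins
            x = int(cx + r * np.cos(t))
            y = int(cy + r * np.sin(t))
            if 0 <= x < w and 0 <= y < h:
                out[i, j] = image[y, x]  # nearest-neighbour sample
    return out

# Toy usage on a synthetic 64x64 gradient image
img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
lp = log_polar(img)
```

In practice a library routine (e.g. OpenCV's `warpPolar` with the log flag) would replace the explicit loops; the loop form is kept here only to make the mapping visible.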
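The pair-activation and nearest-obstacle behaviour described in the abstract can be sketched as follows. The direction labels, angle thresholds, and speed-of-sound constant are illustrative assumptions, not parameters from the paper.

```python
# Hypothetical sketch of the Sonar Glass protocol: head pose selects
# a sensor pair, and the closer of the pair's two readings is the one
# announced on the voice track.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def echo_to_distance(echo_time_s):
    """Convert a round-trip sonar echo time to a one-way distance (m)."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

def active_pair(head_yaw_deg, head_pitch_deg):
    """Pick the sensor pair matching the current head pose,
    mimicking how gaze direction selects the visual field."""
    if head_pitch_deg > 20:
        return ("front", "top")
    if head_pitch_deg < -20:
        return ("front", "bottom")
    if head_yaw_deg < -20:
        return ("front", "left")
    if head_yaw_deg > 20:
        return ("front", "right")
    return ("left", "right")  # looking straight ahead

def nearest_obstacle(echo_times, pair):
    """Compare the active pair's readings and return the closer one."""
    readings = {s: echo_to_distance(echo_times[s]) for s in pair}
    sensor = min(readings, key=readings.get)
    return sensor, readings[sensor]

# Toy reading set: head turned right, obstacle closest on the right
echoes = {"left": 0.012, "right": 0.006, "front": 0.020,
          "top": 0.030, "bottom": 0.030}
pair = active_pair(head_yaw_deg=30, head_pitch_deg=0)
sensor, dist = nearest_obstacle(echoes, pair)
```

In this toy run the head yaw of 30 degrees activates the front/right pair, and the right sensor's 6 ms echo (about 1.03 m) wins over the front sensor's 20 ms echo, so the right-side obstacle would be announced first.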

