Abstract

The purpose of this study was to assess the ability of blind individuals to reach for and grasp objects under the guidance of auditory (verbal) or vibrotactile cues controlled by real-time computer vision algorithms. For these experiments, we created the Object Localization and Tracking System (OLTS). The OLTS comprised a head-mounted wide-angle (92° diagonal) monocular camera, a central processing unit, and one of two types of physical feedback: bone-conduction headphones for auditory cues or cranially positioned vibration motors. A computer vision algorithm, the Context Tracker, processed live video to track objects in front of the visually impaired subject. Physical feedback was then generated based on the object position. This feedback guided the user to move the camera until the desired object fell within the central region of the camera view, defined as an angular portion of the camera field of view. The central angle was varied between 3.9° and 39.6°. Experiments consisted of localizing and grasping an object based on the feedback provided. On average, subjects were able to locate the correct object within 20 seconds. With auditory feedback, a central angle of 7.8° led to poor performance compared with the other angles; with vibrotactile feedback, performance worsened at a central angle of 3.9°. No consistent performance trends were evident based on the age of blindness onset.
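The abstract does not specify how the central-region test was implemented. As a rough illustration only, the check could be a pinhole-camera angular test on the tracked object's pixel position; the resolution, horizontal field of view, and function names below are assumptions, not values reported in the paper.

```python
import math

# Assumed parameters (not from the paper): image resolution and horizontal
# field of view. The paper reports only a 92 degree diagonal FOV.
IMG_W, IMG_H = 640, 480        # assumed camera resolution in pixels
HFOV_DEG = 80.0                # assumed horizontal FOV in degrees

# Focal length in pixels under a pinhole-camera approximation.
FOCAL_PX = (IMG_W / 2) / math.tan(math.radians(HFOV_DEG) / 2)

def in_central_region(cx, cy, central_angle_deg):
    """Return True if a tracked object's center (cx, cy), in pixels,
    lies within the central cone of half-angle central_angle_deg / 2
    about the camera's optical axis."""
    dx = cx - IMG_W / 2
    dy = cy - IMG_H / 2
    offset_px = math.hypot(dx, dy)                      # radial pixel offset
    offset_deg = math.degrees(math.atan(offset_px / FOCAL_PX))
    return offset_deg <= central_angle_deg / 2

# Example: test a tracked bounding-box center against the 7.8 degree region,
# one of the central angles studied in the experiments.
print(in_central_region(330, 250, 7.8))
```

In such a scheme, feedback would continue (for example, directional verbal cues or vibration) while the test returns False, and stop or change once the object enters the central region.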
