Abstract

Blind people often need to identify objects around them, from packages of food to items of clothing. Automatic object recognition continues to provide limited assistance in such tasks because models tend to be trained on images taken by sighted people, which differ in background clutter, scale, viewpoint, occlusion, and image quality from photos taken by blind users. We explore personal object recognizers, where visually impaired people train a mobile application with a few snapshots of objects of interest and provide custom labels. We adopt transfer learning with a deep learning system for user-defined multi-label k-instance classification. Experiments with blind participants demonstrate the feasibility of our approach, which reaches accuracies over 90% for some participants. We analyze user data and feedback to explore the effects of sample size, photo-quality variance, and object shape, and we contrast models trained on photos taken by blind participants with those trained on photos by sighted participants and with generic recognizers.
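The paper does not include implementation details, but the core idea, a personal recognizer trained from a few user-labeled snapshots on top of a pretrained deep network, can be sketched as a nearest-centroid classifier over frozen embeddings. Everything below is an illustrative assumption: the function names are hypothetical, and the embeddings stand in for features extracted by a pretrained backbone (the abstract's transfer-learning component), which is not shown here.

```python
# Minimal sketch of a few-shot "personal object recognizer".
# Assumption: each photo has already been mapped to an embedding
# vector by a frozen, pretrained network (not shown). We train a
# lightweight head: one normalized centroid per user-defined label.
import numpy as np


def train_personal_recognizer(examples):
    """examples: dict mapping a user's label -> list of embedding
    vectors (the few snapshots the user took of that object).
    Returns a dict of unit-length class centroids."""
    centroids = {}
    for label, vecs in examples.items():
        mean = np.mean(np.stack(vecs), axis=0)
        centroids[label] = mean / np.linalg.norm(mean)
    return centroids


def predict(centroids, embedding):
    """Return the user label whose centroid has the highest
    cosine similarity with the query embedding."""
    v = embedding / np.linalg.norm(embedding)
    return max(centroids, key=lambda lab: float(np.dot(centroids[lab], v)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic embeddings: two objects clustered around two directions.
    cereal = [np.array([1.0, 0.0, 0.0]) + 0.05 * rng.normal(size=3) for _ in range(5)]
    mug = [np.array([0.0, 1.0, 0.0]) + 0.05 * rng.normal(size=3) for _ in range(5)]
    model = train_personal_recognizer({"cereal": cereal, "mug": mug})
    print(predict(model, np.array([0.9, 0.1, 0.0])))
```

A nearest-centroid head is only one plausible choice; the paper's actual system may fine-tune classifier layers instead. The sketch's appeal for this setting is that it needs only the handful of examples (k instances per object) that a blind user can realistically capture.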
