Abstract

Background and Objective: Daily activities such as shopping and navigating indoors are challenging for people with visual impairment. Researchers have proposed a variety of solutions to help visually impaired people navigate both indoors and outdoors.

Methods: We applied deep learning to help visually impaired people navigate indoors using markers. We propose a system that detects markers and supports indoor navigation using an improved Tiny-YOLOv3 model. A dataset was created by collecting marker images from recorded videos and augmenting them with image processing techniques such as rotation, brightness adjustment, and blurring. After training and validating the model, its performance was evaluated on a held-out test dataset and on real videos.

Results: The contributions of this paper are: (1) we developed a navigation system to help people with visual impairment navigate indoors using markers; (2) we implemented and tested a deep learning model based on Tiny-YOLOv3 to detect ArUco markers in challenging conditions; (3) we implemented and compared several modified versions of the original model to improve detection accuracy. The modified Tiny-YOLOv3 model achieved an accuracy of 99.31% in challenging conditions, compared with 96.11% for the original model.

Conclusion: The training and testing results show that the improved Tiny-YOLOv3 models outperform the original model.
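The augmentation step described in the Methods section (rotation, brightness adjustment, and blurring) can be sketched roughly as follows. This is a minimal illustration using only NumPy, not the authors' implementation: the function name, parameter names, and specific values (90-degree rotations, a box filter for blur) are assumptions for demonstration, since the abstract does not specify them.

```python
import numpy as np

def augment(image, rot_quarters=1, brightness=1.2, blur_kernel=3):
    """Illustrative augmentation pipeline: rotation, brightness, blur.

    `image` is an HxW grayscale float array with values in [0, 255].
    All parameter choices here are hypothetical examples.
    """
    # Rotation: 90-degree multiples via np.rot90 (a simplification;
    # the paper's actual rotation angles are not stated).
    rotated = np.rot90(image, k=rot_quarters)

    # Brightness: scale pixel values, clipping to the valid range.
    bright = np.clip(rotated * brightness, 0, 255)

    # Blur: simple box filter implemented as a 2D moving average.
    k = blur_kernel
    pad = k // 2
    padded = np.pad(bright, pad, mode="edge")
    blurred = np.zeros_like(bright)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + bright.shape[0],
                              dx:dx + bright.shape[1]]
    return blurred / (k * k)
```

In practice, each source frame would be passed through several such transformations (and combinations of them) to enlarge the training set before fine-tuning the Tiny-YOLOv3 detector.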
