Navigating tactile-paved footpaths surrounded by static and dynamic obstacles of various sizes is one of the biggest impediments visually impaired people face, especially in Dhaka, Bangladesh. The problem is important to address given the number of accidents on such densely populated footpaths. We propose a novel deep-edge solution using computer vision to alert users to obstacles in their vicinity and reduce the need for a walking cane. This study introduces a novel, diverse dataset of tactile footpaths covering different areas of Dhaka. Existing state-of-the-art deep neural networks for object detection are fine-tuned and evaluated on this dataset. A heuristic-based breadth-first navigation algorithm (HBFN) is developed to compute safe, obstacle-free navigation directions; it is deployed in a smartphone application that automatically captures images of the footpath ahead and delivers real-time guidance by speech. The findings demonstrate the effectiveness of the YOLOv8s object detection model, which outperformed the other benchmark models on this dataset, achieving an mAP of 0.974 and an F1 score of 0.934. After quantization, the model's size is reduced by 49.53% while retaining 98.97% of the original mAP.
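To make the navigation idea concrete, the core of a breadth-first navigation step can be sketched as a shortest-path search over an occupancy grid derived from the detected obstacles. This is only a minimal illustration of the general technique, not the paper's HBFN implementation: the grid representation, the function name `bfs_direction`, and the three spoken directions are all assumptions introduced here.

```python
from collections import deque

def bfs_direction(grid, start):
    """Breadth-first search over a footpath occupancy grid
    (0 = free cell, 1 = obstacle). Returns the first step
    ('forward'/'left'/'right') of the shortest obstacle-free
    path from `start` (row, col) to the top row of the grid,
    or None if no safe path exists."""
    rows, cols = len(grid), len(grid[0])
    # Map the first move taken from `start` to a spoken direction.
    moves = {(-1, 0): "forward", (0, -1): "left", (0, 1): "right"}
    queue = deque([(start, None)])  # (cell, first move on its path)
    seen = {start}
    while queue:
        (r, c), first = queue.popleft()
        if r == 0:                  # reached the far edge of the view
            return first
        for (dr, dc), name in moves.items():
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), first or name))
    return None                     # no obstacle-free path ahead

# Obstacle directly ahead of the user: BFS routes around it.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(bfs_direction(grid, (2, 1)))  # → left
```

In a real pipeline the grid would be built each frame from the detector's bounding boxes projected onto the footpath, and the returned direction handed to the text-to-speech layer; the paper's heuristic component would additionally bias the search, which this plain BFS omits.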