Abstract
In this study, the tactile coating (tactile paving) surfaces used by visually impaired individuals were detected with deep learning. For this detection, four You Only Look Once (YOLO) architectures, among the most effective deep learning methods for object detection, were used. No existing dataset was employed; instead, a new dataset was prepared specifically for this study. For the dataset, 6278 images of tactile coating surfaces were collected. To support real-time applications, the images were obtained from many different environments. The tactile coating surfaces in the images were labelled individually, yielding a total of 9184 labels. The dataset was applied to the YOLOv5, YOLOv6, YOLOv7, and YOLOv8 architectures. The highest performance was achieved with the YOLOv8 architecture, with an accuracy of 97%, an F1-score of 0.940, and an mAP@.5 of 0.977. The model was evaluated with k-fold cross-validation to assess the performance measurements. To make the study usable in real time, the frame rate was increased to 150 frames per second (FPS).