Abstract

For many years, braille-assistive technologies have aided blind individuals in reading, writing, learning, and communicating with sighted individuals. These technologies have been instrumental in promoting inclusivity and breaking down communication barriers in the lives of blind people. One such technology is the Optical Braille Recognition (OBR) system, which facilitates communication between sighted and blind individuals. However, current OBR systems cannot convert braille documents into multilingual texts, which makes it difficult for sighted individuals to learn braille on their own. To address this gap, we propose a segmentation- and deep-learning-based approach, named Fly-LeNet, that converts braille images into multilingual texts. The approach comprises image acquisition, preprocessing, segmentation using the Mayfly optimization algorithm with a thresholding method, and a braille-to-multilingual-text mapping step. It employs the LeNet-5 deep learning model to recognize braille cells. We evaluated the performance of Fly-LeNet through several experiments on two datasets of braille images. Dataset-1 consists of 1404 labeled samples of 27 braille signs representing alphabet letters, while Dataset-2 comprises 5420 labeled samples of 37 braille symbols representing letters, numbers, and punctuation marks, of which we used 2000 samples for cross-validation. The proposed model achieved high classification accuracies of 99.77% and 99.80% on the test sets of the first and second datasets, respectively. These results demonstrate the potential of Fly-LeNet for multilingual braille transformation, enabling effective communication between blind and sighted individuals.
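To make the recognition stage concrete, the sketch below shows a LeNet-5-style classifier for segmented braille-cell images in PyTorch. This is a minimal illustration, not the authors' exact configuration: the 32×32 grayscale input size, the tanh activations, and the use of 37 output classes (matching Dataset-2's symbol count) are assumptions, since the abstract does not specify these details.

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """LeNet-5-style network for braille cell recognition.

    Illustrative sketch only: input size (32x32 grayscale),
    activations, and class count are assumptions, not the
    paper's exact settings.
    """

    def __init__(self, num_classes: int = 37):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),       # one logit per braille symbol
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify a batch of eight segmented braille-cell crops.
model = LeNet5(num_classes=37)
logits = model(torch.randn(8, 1, 32, 32))
print(logits.shape)  # torch.Size([8, 37])
```

In a full pipeline, each crop fed to the network would come from the segmentation stage, and the predicted symbol index would then be passed through the multilingual mapping step to produce text in the target language.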
