Abstract

Advances in technology are changing how visually impaired people read and write Braille. Learning Braille in one's native language is more convenient for its users. This study proposes an improved backend processing algorithm for a previously developed touchscreen-based Braille text entry application. The application is used to collect Urdu Braille data, which is then converted to Urdu text. Braille-to-text conversion has previously been done for Hindi, Arabic, Bangla, Chinese, English, and other languages. For this study, Urdu Braille Grade 1 data were collected for 39 classes, one per Urdu character, from class 1, Alif (ﺍ), to class 39, Bri Yay (ے). For each class, 144 cases were collected. The dataset was gathered from visually impaired students at The National Special Education School, who entered the Urdu Braille alphabet using touchscreen devices. The final dataset contained N = 5638 cases. A Reconstruction Independent Component Analysis (RICA)-based feature extraction model was created for Braille-to-Urdu text classification. The 39 classes were divided into three groups of 13 each, i.e., category 1 (1–13), Alif-Zaal (ﺫ - ﺍ), category 2 (14–26), Ray-Fay (ﻒ - ﺮ), and category 3 (27–39), Kaaf-Bri Yay (ے - ﻕ), for clearer presentation and analysis. Performance was evaluated in terms of true positive rate, true negative rate, positive predictive value, negative predictive value, false positive rate, total accuracy, and area under the receiver operating characteristic curve. For comparison, robust machine learning techniques, namely support vector machine, decision tree, and K-nearest neighbors, were used; among these, the support vector machine achieved the highest performance with 99.73% accuracy. This work currently covers only Grade 1 Urdu Braille. In the future, we plan to extend it to Grade 2 Urdu Braille with text and speech feedback on touchscreen-based Android phones.
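The pipeline summarized above (RICA feature extraction followed by an SVM classifier over 39 character classes) can be sketched as follows. This is a minimal illustration, not the authors' implementation: RICA is not available in scikit-learn, so `FastICA` stands in for the feature-extraction step, and randomly generated arrays substitute for the real touchscreen dataset (N = 5638 cases, 39 classes).

```python
# Hedged sketch of a RICA-style-features + SVM pipeline, assuming
# scikit-learn. FastICA is a stand-in for RICA; the data is synthetic.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(5638, 12))         # placeholder touch-gesture features
y = rng.integers(1, 40, size=5638)      # classes 1 (Alif) .. 39 (Bri Yay)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    FastICA(n_components=8, random_state=0),  # stand-in for RICA features
    SVC(kernel="rbf"))                        # SVM classifier
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

With random placeholder data the accuracy is near chance; the reported 99.73% refers to the paper's real dataset and tuned model.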

Highlights

  • Smart devices are a powerful tool for improving the living standards of people with visual disabilities [1]

  • True positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), negative predictive value (NPV), total accuracy (TA), false positive rate (FPR), and area under the curve (AUC) were the performance metrics employed in the evaluation

  • Better results were seen with the support vector machine (SVM), a sequential model, and the GoogLeNet Inception model. The maximum performance with the lowest error rate was achieved by SVM with the Reconstruction Independent Component Analysis (RICA)-based feature extraction method: TPR (93.96%), TNR (99.85%), PPV (94.51%), NPV (99.87%), TA (99.73%), and FPR (0.14%)
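The per-class metrics listed above (TPR, TNR, PPV, NPV, FPR, TA) can all be derived from a multiclass confusion matrix in one-vs-rest fashion. The sketch below uses a made-up 3×3 matrix for illustration; the numbers are not the paper's data.

```python
# One-vs-rest per-class metrics from a multiclass confusion matrix.
# The 3x3 matrix is illustrative only, not the paper's results.
import numpy as np

cm = np.array([[50, 2, 1],
               [3, 45, 2],
               [0, 4, 48]])

tp = np.diag(cm)
fp = cm.sum(axis=0) - tp
fn = cm.sum(axis=1) - tp
tn = cm.sum() - (tp + fp + fn)

tpr = tp / (tp + fn)          # sensitivity / recall
tnr = tn / (tn + fp)          # specificity
ppv = tp / (tp + fp)          # precision
npv = tn / (tn + fn)
fpr = fp / (fp + tn)          # equals 1 - TNR
ta = (tp + tn) / cm.sum()     # per-class total accuracy
```

Note that FPR is simply the complement of TNR, which is why the paper can report both a TNR of 99.85% and an FPR of about 0.14%.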


Introduction

Smart devices are a powerful tool for improving the living standards of people with visual disabilities [1]. People learn by watching videos, tutorials, and online courses on their smart devices [9]. Braille is the writing system most commonly used by visually impaired people. A Braille cell comprises six dots arranged in two columns of three rows [10]. Visually impaired people write on sheets with the help of a stylus and read by gliding their fingers over the raised dots. Writing Braille with these traditional tools is difficult for visually impaired people, which motivates touchscreen-based text entry.

