Abstract

Smart devices are effective in helping people with impairments overcome their disabilities and improve their living standards. Braille is a popular communication method among visually impaired people. Touch-screen smart devices can take Braille input and instantaneously convert it into a natural language, but most existing schemes require location-specific input that is difficult for visually impaired users. In this study, a position-free, accessible touchscreen-based Braille input algorithm is designed and implemented for visually impaired people. It aims to place the least burden on the user, who is only required to tap the dots needed for a specific character. Users entered English Braille Grade 1 characters (a–z) through a newly designed application, yielding a dataset of 1258 images. Classification was performed using Deep Learning (DL) techniques, with the data split 70%–30% for training and validation. The proposed method was thoroughly evaluated on the dataset collected from visually impaired people, and the deep learning results were compared with classical machine learning techniques: Naïve Bayes (NB), Decision Trees (DT), Support Vector Machines (SVM), and K-Nearest Neighbors (KNN). The multi-class problem was divided into two categories, i.e., Category-A (a–m) and Category-B (n–z). Performance was evaluated using Sensitivity, Specificity, Positive Predicted Value (PPV), Negative Predicted Value (NPV), False Positive Rate (FPR), Total Accuracy (TA), and Area Under the Curve (AUC). The GoogLeNet model achieved the highest performance, followed by the Sequential model, SVM, DT, KNN, and NB. The results show that the proposed Braille input method for touch-screen devices is effective and that the deep learning method can predict the user's input with high accuracy.
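The per-class evaluation metrics named in the abstract all derive from a binary (one-vs-rest) confusion matrix. As a minimal sketch, assuming hypothetical counts rather than the paper's actual results, they can be computed as follows:

```python
# Sketch: evaluation metrics from a binary confusion matrix, as used when
# scoring one Braille letter against all others. The counts in the example
# are illustrative, not the study's reported figures.

def binary_metrics(tp, fp, tn, fn):
    """Return Sensitivity (TPR), Specificity (TNR), PPV, NPV, FPR,
    and Total Accuracy for one class treated one-vs-rest."""
    return {
        "sensitivity": tp / (tp + fn),            # TPR: recall on positives
        "specificity": tn / (tn + fp),            # TNR
        "ppv": tp / (tp + fp),                    # Positive Predicted Value
        "npv": tn / (tn + fn),                    # Negative Predicted Value
        "fpr": fp / (fp + tn),                    # FPR = 1 - specificity
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # Total Accuracy
    }

# Example with made-up counts for one letter class:
m = binary_metrics(tp=90, fp=5, tn=95, fn=10)
print(m["sensitivity"])  # 0.9
print(m["fpr"])          # 0.05
```

The same formulas apply per class in the 26-letter problem; macro-averaging over classes then gives the aggregate figures the study reports.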

Highlights

  • The term "visually impaired" is used for people with no vision or non-recoverable low vision

  • This study focuses on the design, implementation, and evaluation of a new touchscreen-based Braille input method

  • GoogLeNet shows better performance than the rest of the techniques in terms of True Positive Rate (TPR), True Negative Rate (TNR), Positive Predicted Value (PPV), Negative Predicted Value (NPV), Total Accuracy (TA), False Positive Rate (FPR), and False Negative Rate (FNR), with a reduced False Discovery Rate (FDR) of 3.39%



Introduction

The term "visually impaired" refers to people with no vision or non-recoverable low vision. Several input methods have been designed for entering Braille using touch screens, e.g., TypeInBraille [20], EdgeBraille [21], VBraille [22], Perkinput [23], and BrailleEasy [9]. The user enters data by tapping on the screen, but tapping with both fingers simultaneously to enter two consecutive dots is not viable for the visually impaired. In BrailleTouch, visually impaired people have to use both hands and multiple fingers to enter characters at fixed positions [23]. A comparative study conducted by Subash specifies four large buttons on the screen, and tapping gestures such as single, double, and triple taps are used to enter Braille dots [28]. BrailleEasy uses single, double, and triple taps to enter dots, but memorizing the reference points is difficult [9].
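The position-free idea described above can be sketched as a simple lookup: the user taps only the dots present in a character, and the resulting set of dot numbers (1–6, in standard Braille cell order) is mapped to a letter. The dot patterns below are the standard English Braille Grade 1 assignments for a–j; the function and table names are illustrative, not the authors' implementation, which classifies images of the taps with deep learning rather than decoding dot sets directly.

```python
# Sketch: decoding a position-free set of tapped Braille dots (a-j shown).
# Dot numbering follows the standard 2x3 Braille cell: 1-2-3 left column
# top to bottom, 4-5-6 right column top to bottom.

BRAILLE_TO_CHAR = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def decode_taps(dots):
    """Map the set of tapped dot numbers to a letter, or None if unknown."""
    return BRAILLE_TO_CHAR.get(frozenset(dots))

print(decode_taps({1, 2, 5}))  # h
```

Because only membership in the dot set matters, the user never has to hit a fixed on-screen location for each dot, which is the accessibility gain over the fixed-position schemes surveyed above.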

