Abstract

Capacitive touch screens (CTSs) are essential components of most of today's digital devices. For visually impaired (VI) users, however, CTSs are more challenging to implement because of the uneven topography of the tactile surface, and this field therefore remains largely underdeveloped. Considering the limited space around the microactuators driving the Braille dots of a tactile screen with ten dots-per-inch (dpi) resolution, the materials used for the CTS should be flexible and durable, with high mechanical strength. In this work, a flexible CTS with a total thickness of 210 µm is developed, based on polyimide (PI) and silver nanowire (AgNW) electrodes. The AgNWs measure on average 7.9 ± 2.4 µm in length and 85 ± 24 nm in width. The AgNW electrodes show low resistance and good adhesion to the PI substrate. A gesture-recognition dataset is collected from the capacitive data to classify different gestures (single- and double-click, swipe-left and -right, scroll-up and -down, as well as zoom-in and -out) using two approaches, machine learning and deep learning. The best performance is obtained with the YOLO model, reaching a validation accuracy of 97.76%. Finally, a software application is developed that uses the proposed hand gestures in real time to foster interaction of VI users with the tactile display, allowing them to navigate a Windows file system and interact with documents via hand gestures in much the same way that sighted users do on a conventional touch display. This work paves the way for using CTSs in tactile displays developed for VI users.
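As a rough illustration of the machine-learning branch of such a gesture pipeline, the minimal sketch below trains a classifier on flattened capacitive frame sequences for the eight gesture classes named in the abstract. All shapes, the synthetic data, and the RandomForest choice are assumptions for illustration only; they are not the models, grid size, or dataset reported in the paper (which also evaluates a YOLO-based deep-learning approach).

```python
# Hypothetical sketch: classifying touch gestures from capacitive frame sequences.
# Frame count, electrode grid size, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

GESTURES = ["single_click", "double_click", "swipe_left", "swipe_right",
            "scroll_up", "scroll_down", "zoom_in", "zoom_out"]
FRAMES, ROWS, COLS = 20, 16, 10   # assumed sequence length and electrode grid

rng = np.random.default_rng(0)
n_samples = 800
# Placeholder for real capacitance readings: (samples, frames, rows, cols),
# flattened into one feature vector per gesture sample.
X = rng.normal(size=(n_samples, FRAMES, ROWS, COLS)).reshape(n_samples, -1)
y = rng.integers(0, len(GESTURES), size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real capacitive recordings in place of the synthetic arrays, the same interface (fit on labeled sequences, predict gesture classes) would apply; the deep-learning route in the paper instead treats the capacitive maps as images for a YOLO-style model.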