Eye-based communication systems, such as the Blink-To-Speak language, are essential tools for individuals with motor neuron disorders, enabling them to articulate their needs and emotions. These systems use eye movements as the means of interaction, providing a vital communication channel for people with severe speech impairments: by performing specific eye gestures, users can convey their thoughts, fostering independence and improving quality of life. However, many existing eye-tracking technologies are complex and costly, which limits their accessibility in low-income regions. Blink-To-Live addresses this gap using only a mobile phone camera: the system captures real-time video frames and detects and tracks the user's eyes through facial landmark detection. Four primary eye movements, Left, Right, Up, and Blink, form the alphabet of the Blink-To-Live communication model, allowing users to express more than 60 daily-life commands. By translating sequences of three eye gestures, the system generates readable sentences that are displayed on a screen and can be vocalized through synthesized speech. Unlike sensor-dependent systems, Blink-To-Live is designed for affordability, flexibility, and ease of use, requiring no specialized software or hardware. A prototype has been tested with a group of participants, showing positive results in terms of ease of use, flexibility, and cost-effectiveness.
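As a rough illustration of the communication model described above, the sketch below shows how triples drawn from a four-gesture alphabet (Left, Right, Up, Blink) can index up to 4³ = 64 distinct codes, consistent with the abstract's figure of more than 60 commands. The phrase table, the eye-aspect-ratio (EAR) helper, and its landmark ordering are hypothetical placeholders for exposition; the paper's actual mapping and detection parameters are not given here.

```python
# Minimal sketch of a Blink-To-Live-style gesture-sequence decoder.
# The phrase table and EAR helper are illustrative assumptions, not the
# paper's actual mapping or parameters.
from itertools import product
from math import dist

GESTURES = ["Left", "Right", "Up", "Blink"]  # the four primary eye movements

# Sequences of three gestures give 4**3 = 64 distinct codes, which is how
# the system can cover more than 60 daily commands.
ALL_CODES = list(product(GESTURES, repeat=3))
assert len(ALL_CODES) == 64

# Hypothetical phrase table: in the real system each code maps to a
# Blink-To-Speak command; only a few entries are sketched here.
PHRASES = {
    ("Blink", "Up", "Blink"): "I need water.",
    ("Left", "Left", "Up"): "I am in pain.",
    ("Right", "Blink", "Left"): "Please call the nurse.",
}

def eye_aspect_ratio(eye):
    """EAR over six (x, y) eye landmarks: low values indicate a blink.
    Landmark ordering (corners first, then upper/lower pairs) follows the
    common convention; an assumption, since the abstract gives no formula."""
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def decode(sequence):
    """Translate a buffered triple of gestures into a readable sentence."""
    if len(sequence) != 3:
        raise ValueError("the model decodes fixed-length triples of gestures")
    return PHRASES.get(tuple(sequence), "<unmapped command>")

if __name__ == "__main__":
    # Example: a tracked gesture stream chunked into triples and decoded.
    stream = ["Blink", "Up", "Blink", "Left", "Left", "Up"]
    for i in range(0, len(stream), 3):
        sentence = decode(stream[i:i + 3])
        print(sentence)  # would also be handed to a TTS engine for speech
```

In the pipeline the abstract describes, each decoded sentence would be displayed on screen and passed to a speech synthesizer; the gesture stream itself would come from classifying eye landmarks in the phone-camera frames.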