The growing demand for assistive technologies for individuals with physical disabilities has driven significant advances in human-computer interaction (HCI). Eye gaze tracking is a promising input modality, offering a non-invasive and intuitive way to enhance accessibility and interaction. This paper presents iAlert, an eye gaze-based alert system designed to provide timely assistance to individuals with limited mobility or communication abilities. By analyzing eye movements, iAlert detects user intent and triggers appropriate responses, facilitating interaction with the environment, enhancing safety, and offering real-time assistance in everyday tasks. The system holds potential for improving quality of life for individuals with physical impairments, elderly users, and others who rely on assistive technologies.

The proposed methodology integrates eye gaze tracking with machine learning to build an intelligent alert system. Gaze data is captured using specialized hardware, such as infrared sensors or cameras, that tracks the position and movement of the eyes. The gaze signal is then pre-processed to remove noise and to extract key features such as fixations, saccades, and gaze direction. Convolutional Neural Networks (CNNs) classify the gaze data and predict user intent, enabling real-time decision-making, while a Support Vector Machine (SVM) classifier detects specific gestures or commands, such as a blink or a prolonged gaze, which are mapped to particular actions (e.g., triggering an alert, controlling a device, or communicating a need). Through continuous learning, the system adapts to user-specific behaviors over time, ensuring personalized assistance.

The expected results are twofold: first, high accuracy in detecting and responding to gaze commands, and second, an enhanced user experience in terms of real-time responsiveness and adaptability. The system is expected to exceed 90% accuracy in identifying user intent, particularly in controlled environments. The combination of CNN and SVM classifiers is critical for robust classification of gaze data and for reducing the error rate in real-time operation, and the adaptive learning approach allows responses to become more accurate and personalized over time. These algorithms are important because they can model the complex, non-linear relationships within gaze data, enabling the system to function effectively across different users and contexts. By leveraging deep learning techniques, iAlert can scale to a wide range of assistive applications, making it a valuable tool for enhancing independence and accessibility for individuals with physical challenges.
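The abstract describes the gesture-detection stage only at a high level. As a rough illustration of how dwell and blink features could feed an SVM classifier of the kind mentioned above, the minimal Python sketch below trains scikit-learn's SVC on synthetic gaze windows; every name, threshold, and data value here is an assumption for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): dwell/blink features + SVM gesture
# classification, trained on synthetic gaze windows so the example is self-contained.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

BLINK_MS = 150       # assumed: closed-eye run at least this long counts as a blink
DWELL_RADIUS = 30.0  # assumed: max gaze spread (px) still treated as one fixation

def extract_features(window):
    """window: list of (x, y, t_ms, eye_open) gaze samples.
    Returns [dwell_ms, dispersion_px, blink_count]."""
    open_recs = [(x, y, t) for x, y, t, is_open in window if is_open]
    dispersion = dwell = 0.0
    if len(open_recs) > 1:
        pts = np.array([(x, y) for x, y, _ in open_recs])
        dispersion = float(np.linalg.norm(pts.std(axis=0)))
        if dispersion < DWELL_RADIUS:            # stable fixation -> prolonged gaze
            dwell = open_recs[-1][2] - open_recs[0][2]
    blinks, closed_since = 0, None               # count long closed-eye runs
    for _, _, t, is_open in window:
        if not is_open and closed_since is None:
            closed_since = t
        elif is_open and closed_since is not None:
            blinks += int(t - closed_since >= BLINK_MS)
            closed_since = None
    return [dwell, dispersion, float(blinks)]

def synthetic_window(kind, rng):
    """Toy labeled window: 0 = idle scanning, 1 = prolonged gaze, 2 = blink command."""
    ts = np.arange(0, 1000, 20)                  # 50 samples over one second
    spread = 5.0 if kind == 1 else 80.0          # tight gaze cluster only for fixation
    xy = rng.normal(200.0, spread, size=(len(ts), 2))
    eyes_open = np.ones(len(ts), dtype=bool)
    if kind == 2:
        eyes_open[10:25] = False                 # ~300 ms closed-eye run
    return [(x, y, int(t), bool(o)) for (x, y), t, o in zip(xy, ts, eyes_open)]

rng = np.random.default_rng(0)
labeled = [(synthetic_window(k, rng), k) for k in (0, 1, 2) for _ in range(20)]
X = np.array([extract_features(w) for w, _ in labeled])
y = np.array([label for _, label in labeled])

# Scale features before the SVM: dwell (ms) and blink counts differ by orders of magnitude.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)

def on_window(window):
    """Map a predicted gesture to an action (stubbed as a print in this sketch)."""
    intent = clf.predict([extract_features(window)])[0]
    if intent == 1:
        print("ALERT: prolonged gaze detected")
    elif intent == 2:
        print("ALERT: blink command detected")

on_window(synthetic_window(1, rng))  # -> ALERT: prolonged gaze detected
```

In a real deployment, the synthetic windows would be replaced by labeled recordings from the tracking hardware, and the CNN stage described in the abstract would supply richer intent features; standardizing the features before the SVM matters because dwell time (milliseconds) and blink counts live on very different scales.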