Abstract

The electrooculogram (EOG) is the measurement of the biopotential generated by eye movement. These signals are crucial for people with severe motor disabilities because eye movements are rarely impaired in such conditions. Therefore, the correct classification of these signals could find application in the design of simple user interfaces that support independence and communication. This paper presents a comparison of the main classification techniques in the literature for the control of EOG-based human–computer interfaces (HCIs). The compared methods include static threshold, K-nearest neighbor (KNN), artificial neural network (ANN), and support vector machine (SVM) techniques, together with two new ensembles of classifiers: one based on a voting scheme and the other employing two stages to encode the outcomes of the KNN, SVM, and ANN classifiers. All classifiers were compared based on four parameters (precision, specificity, sensitivity, and accuracy) to select the most appropriate approach for real-time use. This work also provides a novel data set consisting of signals from nine healthy participants and additionally compares the above methods on another public data set. Machine learning-based models proved to be more robust for continuous use of an EOG-based HCI, whereas static thresholds are better suited to specific, repetitive actions.
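The comparison described in the abstract (KNN, SVM, and ANN base classifiers, a voting ensemble, and the four evaluation metrics) can be illustrated with a minimal sketch. This is not the authors' code: it assumes scikit-learn, and the feature matrix `X` and label vector `y` are placeholders standing in for EOG features extracted beforehand.

```python
# Illustrative sketch (not the paper's implementation): compare KNN, SVM,
# and ANN classifiers plus a simple majority-voting ensemble, reporting
# accuracy, precision, sensitivity, and specificity on held-out data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, confusion_matrix)

# Placeholder data standing in for extracted EOG features and movement labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = rng.integers(0, 2, size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Base classifiers named in the abstract.
classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
}
# Majority-voting ensemble over the three base classifiers.
classifiers["Voting"] = VotingClassifier(
    estimators=[(name, clf) for name, clf in list(classifiers.items())],
    voting="hard")

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    specificity = tn / (tn + fp)  # true-negative rate
    print(f"{name}: acc={accuracy_score(y_test, y_pred):.2f} "
          f"prec={precision_score(y_test, y_pred):.2f} "
          f"sens={recall_score(y_test, y_pred):.2f} "
          f"spec={specificity:.2f}")
```

The two-stage ensemble mentioned in the abstract would replace the hard-voting step with a second-level model trained on the base classifiers' outputs; the sketch above shows only the simpler voting variant.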
