Abstract

Toward the age of smart human–computer interaction, human activity recognition has become one of the most popular areas for promoting ambient intelligence techniques. To this end, this study aimed to select critical eye-movement features for building artificial intelligence models that recognize three common user activities in front of a computer using an eye tracker. One hundred fifty students participated in this study and performed three everyday computer activities: reading English journal articles, typing English sentences, and watching an English trailer video. While doing these tasks, their eye movements were recorded with a desktop eye tracker (GP3 HD, Gazepoint™, Canada). The collected data were then processed into 19 eye-movement features. Before building convolutional neural network (CNN) models for recognizing the three computer activities, three feature selection methods, namely analysis of variance (ANOVA), extra tree classification (ETC), and mutual information (MI), were used to screen critical features. For each feature selection method, the top five and top 11 selected features were used to build six types of CNN models. For comparison, a seventh type of CNN model was developed using all 19 features. The comparison of the seven model types showed that the models using the top 11 features screened by the ANOVA method were superior to the others, with an average accuracy of 93.15%. This study demonstrates the application of feature selection methods and provides an alternative means of recognizing user activities in front of the computer.
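The following is a minimal sketch, not the authors' code, of the feature-screening step described above, using scikit-learn implementations of the three methods (ANOVA F-test, extra-trees importances, and mutual information) to rank features and keep the top k. The feature matrix, labels, and the choice of k = 11 are placeholders standing in for the study's 19 eye-movement features and three activity classes.

```python
# Hedged illustration of ANOVA / ETC / MI feature screening; data are synthetic placeholders.
import numpy as np
from sklearn.feature_selection import f_classif, mutual_info_classif
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 19))        # placeholder for 19 eye-movement features
y = rng.integers(0, 3, size=300)      # placeholder labels: 0=read, 1=type, 2=watch

k = 11  # the study compared the top 5 and top 11 features

# ANOVA: rank features by F-statistic of a one-way ANOVA across the three classes
anova_top = np.argsort(f_classif(X, y)[0])[::-1][:k]

# MI: rank features by estimated mutual information with the class label
mi_top = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1][:k]

# ETC: rank features by impurity-based importances from an extra-trees classifier
etc = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y)
etc_top = np.argsort(etc.feature_importances_)[::-1][:k]

print("ANOVA top-k feature indices:", anova_top)
print("MI    top-k feature indices:", mi_top)
print("ETC   top-k feature indices:", etc_top)
```

Each ranking would then define the feature subset fed to a separate CNN model, with a further model trained on all 19 features for comparison, as the abstract describes.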

