Abstract

Image classification models are becoming a popular method of analysis for scanpath classification. To implement these models, gaze data must first be reconfigured into a 2D image. However, this step receives relatively little attention in the literature, as focus is mostly placed on model configuration. As standard model architectures have become more accessible to the wider eye-tracking community, we highlight the importance of carefully choosing feature representations within scanpath images, as they may heavily affect classification accuracy. To illustrate this point, we create thirteen sets of scanpath designs incorporating different eye-tracking feature representations from data recorded during a task-based viewing experiment. We evaluate each scanpath design by passing the sets of images through a standard pre-trained deep learning model as well as an SVM image classifier. Results from our primary experiment show an average accuracy improvement of 25 percentage points between the best-performing set and one baseline set.
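To make the pipeline described above concrete, the following is a minimal sketch of one way to render a scanpath as a 2D image and classify it. It assumes fixation data as (x, y, duration) triples; the rendering choices (saccade lines, fixation circles scaled by duration), the frozen ResNet-18 feature extractor, and the linear SVM are illustrative stand-ins, not the specific designs or models evaluated in the paper.

```python
# Illustrative scanpath-to-image classification sketch (not the paper's exact design).
import numpy as np
from PIL import Image, ImageDraw
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

def scanpath_to_image(fixations, size=(224, 224)):
    """Render fixations (x, y, duration) as a 2D image:
    saccades as connecting lines, fixations as circles scaled by duration."""
    img = Image.new("RGB", size, "black")
    draw = ImageDraw.Draw(img)
    pts = [(x, y) for x, y, _ in fixations]
    if len(pts) > 1:
        draw.line(pts, fill="gray", width=2)              # saccade paths
    for x, y, dur in fixations:
        r = max(2, dur / 50)                              # radius encodes duration
        draw.ellipse([x - r, y - r, x + r, y + r], fill="white")
    return img

# Frozen ImageNet-pretrained ResNet-18 used only as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()
preprocess = T.Compose([T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def features(img):
    with torch.no_grad():
        return backbone(preprocess(img).unsqueeze(0)).squeeze(0).numpy()

# Toy example: two synthetic scanpaths, one per (hypothetical) class.
scanpaths = [[(30, 40, 200), (120, 90, 400), (200, 180, 150)],
             [(200, 20, 300), (60, 150, 250), (180, 200, 500)]]
labels = [0, 1]
X = np.stack([features(scanpath_to_image(sp)) for sp in scanpaths])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X))
```

Any change to the rendering step here (e.g. dropping the duration-scaled circles or the saccade lines) changes the feature representation the classifiers see, which is the design dimension the paper's thirteen scanpath sets vary.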
