Abstract

Infants' looking behaviors are often used to measure attention, real-time processing, and learning, frequently from low-resolution videos. Despite the ubiquity of gaze-related methods in developmental science, current analysis techniques usually involve laborious post hoc coding, imprecise real-time coding, or expensive eye trackers that may increase data loss and require a calibration phase. As an alternative, we propose using computer vision methods to perform automatic gaze estimation from low-resolution videos. At the core of our approach is a neural network that classifies gaze directions in real time. We compared our method, called iCatcher, to manually annotated videos from a prior study in which infants looked at one of two pictures on a screen. We demonstrated that the accuracy of iCatcher approximates that of human annotators and that it replicates the prior study's results. Our method is publicly available as an open-source repository at https://github.com/yoterel/iCatcher.
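
To make the core idea concrete, below is a minimal illustrative sketch of a frame-level gaze-direction classifier in PyTorch. The abstract does not specify iCatcher's actual architecture, label set, or input resolution, so everything here (the `GazeClassifier` network, the assumed away/left/right classes, and the 64x64 face-crop input) is a hypothetical stand-in for the general technique, not the authors' implementation.

```python
# Illustrative sketch only: a generic CNN that classifies a low-resolution
# face crop from a video frame into a coarse gaze direction. The class
# labels, input size, and architecture are assumptions for demonstration.
import torch
import torch.nn as nn

GAZE_CLASSES = ["away", "left", "right"]  # assumed label set


class GazeClassifier(nn.Module):
    """Maps one face-crop frame to logits over coarse gaze directions."""

    def __init__(self, num_classes: int = len(GAZE_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # global average pooling -> 1x1
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of face crops, shape (N, 3, 64, 64)
        feats = self.features(x).flatten(1)  # (N, 64)
        return self.head(feats)              # (N, num_classes)


if __name__ == "__main__":
    model = GazeClassifier().eval()
    frame = torch.rand(1, 3, 64, 64)  # stand-in for one cropped video frame
    with torch.no_grad():
        probs = torch.softmax(model(frame), dim=1)
    print(GAZE_CLASSES[int(probs.argmax())])
```

In a real-time pipeline of this kind, each incoming frame would be face-detected and cropped before classification, and per-frame predictions would typically be smoothed over time; those stages are omitted here for brevity.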
