Abstract
Current eye tracking technologies have a number of drawbacks when it comes to practical use in real-world settings. Common challenges, such as high levels of daylight, eyewear (e.g., spectacles or contact lenses), and eye make-up, give rise to noise that undermines their utility as a standard component for mobile computing, design, and evaluation. To work around these challenges, we introduce CrowdEyes, a mobile eye tracking solution that utilizes crowdsourcing for increased tracking accuracy and robustness. We present a pupil detection task design for crowd workers, together with a study that demonstrates the high accuracy of crowdsourced pupil detection in comparison to state-of-the-art pupil detection algorithms. We further demonstrate the utility of our crowdsourced analysis pipeline in a fixation tagging task. In this paper, we validate the accuracy and robustness of harnessing the crowd as both an alternative and a complement to automated pupil detection algorithms, and explore the associated costs and quality of our crowdsourcing approach.
Highlights
Eye tracking is a method of measuring an individual’s eye movement to identify both where a person is looking and the sequence in which the person’s eyes are shifting from one location to another.
Since most of the state-of-the-art pupil detection algorithms are based on the edge filtering approach [7], they are very susceptible to failure under the aforementioned conditions.
Using the Labeled Pupil in the Wild (LPW) dataset [36], we found the farthest distance between two consecutive frames to be under 15px (Figure 6-left), while under 30px between two consecutive multiscale structural similarity index (MSSSIM)-selected frames (Figure 6-right).
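These distances come from comparing annotated pupil positions in consecutive LPW frames, both for the full frame sequence and for the subset of frames retained by MS-SSIM-based selection. The sketch below illustrates one plausible way to implement such similarity-based frame selection; the pyramid-averaged SSIM approximation, the 0.9 threshold, and the use of OpenCV/scikit-image are illustrative assumptions (grayscale NumPy frames are assumed), not the paper's implementation.

```python
# Hedged sketch: keep an eye-video frame only when it differs noticeably
# (low multiscale similarity) from the last frame kept for annotation.
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def approx_ms_ssim(a, b, scales=3):
    """Rough multi-scale SSIM: average single-scale SSIM over a Gaussian pyramid."""
    scores = []
    for _ in range(scales):
        scores.append(ssim(a, b))
        a, b = cv2.pyrDown(a), cv2.pyrDown(b)
    return float(np.mean(scores))

def select_frames_for_annotation(frames, similarity_threshold=0.9):
    """Return indices of frames that differ enough from the previously kept frame."""
    kept = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        if approx_ms_ssim(frames[kept[-1]], frames[i]) < similarity_threshold:
            kept.append(i)
    return kept
```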
Summary
Eye tracking is a method of measuring an individual’s eye movement to identify both where a person is looking (gaze) and the sequence in which the person’s eyes are shifting from one location to another. Tonsen et al. [36] evaluated the pupil detection success rate of five state-of-the-art pupil detection algorithms: Swirski [34], ExCuSe [5], Isophote [37], Gradient [35], and PupilLabs [12], using their large and challenging real-world Labeled Pupil in the Wild (LPW) dataset [36] of 130,856 eye video frames from 22 participants. Fuhl et al. introduced a new pupil detection algorithm named ElSe [6] that outperforms other current state-of-the-art approaches (Swirski, ExCuSe, PupilLabs, Starburst [40], and Set [11]) in an evaluation study [7] that used a large-scale composite dataset of previously annotated images (from [36], [6], [34], and [5]). While ElSe slightly improves on this performance, it cannot yet robustly detect pupil positions in the presence of reflections, poor illumination conditions, or eye make-up (see [7] for detailed results).
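Evaluations of this kind typically report a detection rate: the fraction of frames in which the detected pupil centre lies within a given pixel distance of the ground-truth annotation. The snippet below is a minimal illustration of that metric; the function name, the 5 px default tolerance, and the array layout are assumptions for illustration, not code from the cited studies.

```python
# Hedged sketch: detection rate as the share of frames whose predicted pupil
# centre falls within max_error_px of the ground-truth centre.
import numpy as np

def detection_rate(predicted, ground_truth, max_error_px=5.0):
    """predicted, ground_truth: (n_frames, 2) arrays of (x, y) pupil centres in pixels."""
    errors = np.linalg.norm(np.asarray(predicted) - np.asarray(ground_truth), axis=1)
    return float(np.mean(errors <= max_error_px))
```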