Abstract

Recent advances in head-mounted eye-tracking technology have allowed researchers to monitor eye movements during locomotion in real-world environments, increasing the ecological validity of research on human gaze behavior. While collecting eye-tracking data is becoming more accessible, visual analytics of eye-tracking data remains difficult and time-consuming. As such, there is a significant need for efficient visualization and analysis tools for large-scale eye-tracking data. This work develops a first-of-its-kind eye-tracking data visualization and analysis system that allows for automatic recognition of independent objects within the field of vision, using deep-learning-based semantic segmentation. The system recolors the fixated objects-of-interest by integrating gaze fixation information with semantic maps, effectively allowing researchers to automatically infer what objects users view, and for how long, in dynamic contexts. The contributions are 1) a data visualization and analysis system that uses deep-learning technology along with eye-tracking data to automatically recognize objects-of-interest from head-mounted eye-tracking video recordings, and 2) a graphical user interface that presents objects-of-interest annotations alongside eye-tracking data. The architecture is tested with an outdoor case study of users walking around the Tufts University campus as part of a navigation study administered by a team of research psychologists.

Highlights

  • Head-mounted eye-trackers are lightweight and unobtrusive, which enables the recording of eye movements without restricting movement in more naturalistic experimental settings [12]–[14]

  • The presented ISeeColor software architecture tackles three primary questions (Q1–Q3) for eye-tracking data visualization and analysis: it integrates gaze direction information from the hardware (where), enables automatic recognition of fixated objects in areas-of-interest using image semantic segmentation (what), and facilitates data visualization using fast image recoloring based on fixation duration (how long)

  • The image semantic segmentation algorithm will generate a group of segments, $\{S_1, S_2, \ldots, S_n\}$, where $n$ is the number of possible OOI categories (see the sketch after this list)
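
A minimal sketch of how such a segment set could be materialized, assuming the per-pixel class-label map that typical semantic segmentation networks emit (all names here are hypothetical illustrations, not taken from the paper):

```python
import numpy as np

def build_segments(label_map: np.ndarray) -> dict:
    """Split a per-pixel class-label map (H x W, int) into one boolean
    mask per object-of-interest (OOI) category, i.e. the segment set
    {S_1, ..., S_n} described in the highlight above.
    Returns {category_id: boolean mask}.
    """
    return {int(c): label_map == c for c in np.unique(label_map)}
```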


Summary

INTRODUCTION

Numerous research areas and commercial products utilize head-mounted eye-tracking devices, including education [1], cognitive psychology [2], usability and marketing [3], [4], on-road driving applications [5], medical applications [6], [7], information visualization research [8]–[10], and eye-control accessibility and assistive technology [10], [11]. The presented ISeeColor software architecture tackles three primary questions (Q1–Q3) for eye-tracking data visualization and analysis: it integrates gaze direction information from the hardware (where), enables automatic recognition of fixated objects in areas-of-interest (objects-of-interest) using image semantic segmentation (what), and facilitates data visualization using fast image recoloring based on the fixation duration (how long).

An additional consideration in visualizing eye-tracking data is defining areas-of-interest (AOIs), objects within the visual scene that are of particular interest to researchers for analysis. Kurzhals et al. created a timeline visualization to show AOI-based scanpaths of different viewers based on manual annotation of AOIs. Kurzhals et al. [22] described an AOI annotation process using automatic clustering of eye-tracking data integrated into an interactive labeling and analysis system (see Figure 3). For semantic segmentation, ISeeColor adopts the EncNet network, which produced an mIoU score of 85.9 in the 2012 PASCAL VOC challenge.
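
To make the where/what/how-long pipeline concrete, the following hedged Python sketch (hypothetical function and variable names; an illustration under assumptions, not the authors' implementation) assigns each gaze fixation to the OOI category under the gaze point and accumulates fixation duration per category:

```python
from collections import defaultdict
import numpy as np

def accumulate_fixations(label_map: np.ndarray, fixations) -> dict:
    """Total fixation time per OOI category.

    label_map -- (H, W) int array of per-pixel class labels (the "what")
    fixations -- iterable of (x, y, duration_s) gaze fixations in pixel
                 coordinates (the "where" and "how long")
    Returns {category_id: total_duration_s}.
    """
    h, w = label_map.shape
    dwell = defaultdict(float)
    for x, y, duration in fixations:
        if 0 <= x < w and 0 <= y < h:  # skip gaze samples outside the frame
            dwell[int(label_map[int(y), int(x)])] += duration
    return dict(dwell)
```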

OOI COLOR TRANSFORM USING ALPHA-BLENDING
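
Only the heading survives in this summary. As a minimal sketch of the named technique, the fixated OOI's pixels can be recolored by alpha-blending a highlight color over the scene frame, with the blend weight scaled by fixation duration (the formula out = alpha * overlay + (1 - alpha) * frame and all names below are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def recolor_ooi(frame: np.ndarray, mask: np.ndarray,
                overlay_rgb=(255, 0, 0), alpha=0.5) -> np.ndarray:
    """Alpha-blend `overlay_rgb` over the pixels of the fixated OOI.

    frame -- (H, W, 3) uint8 scene-camera image
    mask  -- (H, W) boolean segment mask of the fixated OOI
    alpha -- blend weight in [0, 1], e.g. derived from dwell time
    """
    out = frame.astype(np.float32)
    out[mask] = alpha * np.asarray(overlay_rgb, np.float32) \
        + (1.0 - alpha) * out[mask]
    return out.astype(np.uint8)
```

For instance, alpha = min(1.0, dwell_seconds / 5.0) would make longer fixations produce more saturated highlights; the 5-second scale is an arbitrary choice for illustration.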
CONCLUSION AND FUTURE WORK