Abstract

Indicators based on visual time-sharing have been used to investigate drivers' visual behavior during additional task execution. However, visual time-sharing analyses have so far been restricted to additional tasks with well-defined temporal start and end points and a dedicated visual target area. We introduce a method to automatically extract visual time-sharing sequences directly from eye tracking data. This facilitates investigations of systems that provide continuous information without well-defined start and end points, and it makes it possible to investigate time-sharing behavior with other types of glance targets, such as the mirrors. Time-sharing sequences are extracted based on between-glance durations: if glances to a particular target are separated by less than a time-based threshold, we assume that they belong to the same information intake event. Our results indicate that a 4-s threshold is appropriate. Examples derived from 12 drivers (about 100 hours of eye tracking data), collected in an on-road investigation of an in-vehicle information system, are provided to illustrate sequence-based analyses. These include investigating human-machine interface designs based on the number of glances per extracted sequence, and increasing the legibility of transition matrices by deriving them from time-sharing sequences instead of single glances. More object-oriented glance behavior analyses, based on additional sensor and information fusion, are identified as the next step. This would enable automated extraction of time-sharing sequences not only for targets fixed in the vehicle's coordinate system, but also for environmental and traffic targets that move independently of the driver's vehicle.
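
To make the extraction rule concrete, the following is a minimal sketch (not the authors' implementation): it assumes glances are available as (target, start, end) records with times in seconds, and groups glances to a chosen target into one time-sharing sequence whenever the between-glance gap falls below the threshold (4 s, per the results above). All names are illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Glance:
    """One glance: where the driver looked and when (seconds)."""
    target: str
    start: float
    end: float

def extract_time_sharing_sequences(glances: List[Glance],
                                   target: str,
                                   threshold: float = 4.0) -> List[List[Glance]]:
    """Group glances to `target` into time-sharing sequences.

    Consecutive glances to the target whose between-glance gap is
    shorter than `threshold` are assumed to belong to the same
    information intake event; a gap of `threshold` or more starts
    a new sequence.
    """
    hits = sorted((g for g in glances if g.target == target),
                  key=lambda g: g.start)
    sequences: List[List[Glance]] = []
    for g in hits:
        if sequences and g.start - sequences[-1][-1].end < threshold:
            sequences[-1].append(g)   # same information intake event
        else:
            sequences.append([g])     # gap >= threshold: new sequence
    return sequences

# Example: three display glances separated by short road glances form
# one sequence; the glance after a long gap starts a second sequence.
glances = [
    Glance("display", 0.0, 1.2), Glance("road", 1.2, 3.0),
    Glance("display", 3.0, 4.1), Glance("road", 4.1, 6.5),
    Glance("display", 6.5, 7.3), Glance("road", 7.3, 20.0),
    Glance("display", 20.0, 21.0),
]
seqs = extract_time_sharing_sequences(glances, "display")
print([len(s) for s in seqs])  # -> [3, 1]
```

Sequence-level measures such as the number of glances per sequence, or transition matrices computed over sequences rather than single glances, can then be derived directly from the grouped output.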
