The removal of drivers’ active engagement in driving tasks under SAE Level 2 (L2) and Level 3 (L3) automation can lead to erratic gaze patterns, which have been linked to subsequent degraded take-over performance. To examine how gaze patterns evolve during the take-over phase, and whether they are influenced by take-over urgency and the location of the human-machine interface, this driving simulator study used a head-up display (HUD) to relay information about the automation status and conducted take-over experiments in which the ego car was about to exit the highway, with variations in automation level (L2, L3) and time budget (2 s, 6 s). In L2 automation, drivers were required to monitor the environment, while in L3 they were engaged in a visual non-driving related task. Manual driving was also included as a baseline. Results showed that, compared to manual driving, drivers in L2 automation focused more on the HUD and the Far Road (roadway beyond a 2 s time headway ahead) and less on the Near Road (roadway within a 2 s time headway ahead), whereas in L3, drivers’ attention was predominantly allocated to the non-driving related task. After receiving take-over requests (TORs), attention in L2 take-overs shifted gradually from the Far Road to the Near Road. This shift progressed roughly in proportion to the elapsed time within the time budget and was more pronounced under the shorter 2 s time budget. In L3, by contrast, drivers’ gaze distribution in the early stage of take-overs was similar for both time budget conditions (2 s vs. 6 s): drivers directed their early glances to the Near Road, with a gradual increase in attention toward the Far Road. The HUD used in the present study showed the potential to maintain drivers’ attention around the road center during automation and to encourage earlier glances at the road after TORs by reducing glances to the instrument cluster, which may be significant for take-over safety. The findings are discussed in terms of an extended conceptual gaze control model, which advances our understanding of gaze patterns around control transitions and the underlying gaze control mechanisms. These results can inform the design of automated vehicles that facilitate the transition of control by guiding drivers’ attention appropriately according to their attentional state and the take-over urgency.