To investigate drivers’ gaze behavior and the characteristics of their gaze positions while driving, we conducted an in-vehicle experiment using a naturalistic driving test method and a non-contact eye-tracking device to collect gaze data. First, we applied the traditional area-of-interest approach, analyzing variations in pupil diameter, gaze position, and dwell time in each area throughout the driving task, and compiled statistics on drivers’ gaze patterns. Next, using the You Only Look Once version 5 (YOLOv5) architecture, we identified the positions of vehicles and obstacles in the captured images; streamlining the network model and integrating an attention mechanism improved target detection accuracy. Finally, by correlating drivers’ gaze data with the positions of upcoming obstacles, we determined where drivers were looking. This data fusion enables finer-grained observation of gaze dispersion and position within a one-second window, providing insight into drivers’ attention distribution and driving behavior.
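The fusion step described above can be illustrated with a minimal sketch: given detection boxes in the (x1, y1, x2, y2, class) layout that YOLOv5 produces, match a gaze point to the object it falls on, and measure gaze dispersion over a one-second sample window. All function names, box coordinates, and the 25 Hz sampling assumption here are illustrative, not the paper's actual implementation.

```python
import math

def gazed_object(gaze_xy, boxes):
    """Return the class name of the first detection box containing the
    gaze point, or None if the gaze falls outside every box.
    Boxes use the (x1, y1, x2, y2, class) layout of YOLOv5's xyxy output."""
    gx, gy = gaze_xy
    for x1, y1, x2, y2, cls in boxes:
        if x1 <= gx <= x2 and y1 <= gy <= y2:
            return cls
    return None

def gaze_dispersion(points):
    """Root-mean-square distance of gaze points from their centroid --
    a simple dispersion measure over a fixed time window."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return math.sqrt(sum((px - cx) ** 2 + (py - cy) ** 2
                         for px, py in points) / n)

# Hypothetical detections from one frame, in image-pixel coordinates.
boxes = [(100, 80, 220, 180, "car"), (300, 90, 360, 200, "pedestrian")]
print(gazed_object((150, 120), boxes))  # car
print(gazed_object((10, 10), boxes))    # None (gaze off all targets)

# 25 gaze samples, i.e. one second at an assumed 25 Hz sampling rate.
window = [(150 + i % 3, 120 + i % 2) for i in range(25)]
print(round(gaze_dispersion(window), 2))
```

In practice the boxes would come from the trained YOLOv5 detector rather than being hard-coded, and the dispersion threshold separating a fixation from a scanning pattern would be calibrated against the eye tracker's accuracy.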