Driving behavior analysis is vital for advanced driving assistance systems, as it helps improve driving behavior and reduce traffic accidents. Most existing driving behavior learning methods rely on either vehicle sensor information or the driver's attention information, and output a classification result only for the current data samples. Visualizing driving behavior over time series data offers an intuitive way to understand and review the driver's continuous actions. However, little progress has been made in combining multi-modal vehicle and driver information for driving behavior learning and visualization. This paper proposes a multi-information driving behavior learning and visualization method with natural gaze prediction, which automatically integrates the driver's gaze direction estimated from a face camera with vehicle sensor data collected from the on-board diagnostics (OBD) system. To estimate eye gaze accurately under large head movements, a calibration-free, head pose-free eye gaze prediction method is proposed based on global- and local-scale sparse encoding, which treats the gaze direction mapping as a classification over small gaze regions. To make driving behavior easier to interpret, latent features representing different driving behaviors are extracted from the fused time series data by FastICA and mapped into RGB color space for distinct visualization. Experimental results demonstrate the effectiveness of the proposed method and show that it outperforms the compared methods.
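The FastICA-to-RGB visualization step can be illustrated with a minimal sketch, assuming three independent components are extracted from the fused gaze and OBD time series and min-max normalized into 8-bit color channels; the feature layout, function name, and normalization choice below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

def behavior_colors(fused_series: np.ndarray) -> np.ndarray:
    """fused_series: (T, D) array of per-time-step fused gaze + OBD features.
    Returns a (T, 3) uint8 array of RGB colors, one per time step."""
    # Extract 3 latent components so each maps directly to one of R, G, B.
    ica = FastICA(n_components=3, random_state=0)
    latent = ica.fit_transform(fused_series)          # shape (T, 3)

    # Min-max normalize each component to [0, 1], then scale to 8-bit RGB.
    lo, hi = latent.min(axis=0), latent.max(axis=0)
    normalized = (latent - lo) / (hi - lo + 1e-12)
    return (normalized * 255).astype(np.uint8)

# Example: 500 time steps of 8 fused features (e.g. gaze yaw/pitch, speed,
# throttle, brake, steering angle, RPM, acceleration -- hypothetical layout).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))
    colors = behavior_colors(X)
    print(colors.shape)  # (500, 3): one RGB color per time step
```

Plotting these colors along the time axis yields a color strip in which changes in hue correspond to changes in the latent driving-behavior features, which is the intuition behind the distinguished visualization described above.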