Abstract

Predicting the driver’s gaze could provide important information for preventing accidents while driving. In this study, machine learning models for estimating the driver’s gaze distraction from head movement data were built, and their performance was compared and evaluated. Participants wore glasses-type eye trackers and performed a task of selecting touch-screen buttons while driving. The input variables were data obtained from a 3-axis accelerometer and a 3-axis gyroscope, and the target variable was eye-gaze data. The results confirmed that the gaze area could be estimated from head movement sensing data alone, with a precision, sensitivity, specificity, and F1-score of 72.1%, 72.5%, 66.0%, and 69.3%, respectively. Models trained on time-series datasets outperformed those trained on non-time-series datasets. This study presents one alternative for determining the driver’s status with an inexpensive sensor.
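The abstract does not specify the model architecture, sampling rate, or windowing parameters. The Python sketch below is only an illustration of the general pipeline described: slicing the 6-axis IMU stream (3-axis accelerometer plus 3-axis gyroscope) into time-series windows and training a binary gaze-distraction classifier. The random forest, the window length, and the `window_imu` helper are all hypothetical stand-ins, not the study's actual method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

# Hypothetical parameters -- the abstract does not report the actual
# sampling rate or window size used in the study.
FS = 50   # assumed IMU sampling rate (Hz)
WIN = 25  # assumed window length: 0.5 s of samples

def window_imu(imu, labels, win=WIN):
    """Slice a (T, 6) stream of accel+gyro samples into flat windows.

    Each window becomes one time-series training example; its label is
    the gaze state (1 = gaze off road, 0 = gaze on road) at the
    window's last sample.
    """
    X, y = [], []
    for start in range(0, len(imu) - win, win):
        X.append(imu[start:start + win].ravel())  # (win * 6,) feature vector
        y.append(labels[start + win - 1])
    return np.asarray(X), np.asarray(y)

# Synthetic stand-in data: 10 minutes of 6-axis IMU and binary gaze labels.
rng = np.random.default_rng(0)
imu = rng.normal(size=(FS * 600, 6))
labels = rng.integers(0, 2, size=FS * 600)

X, y = window_imu(imu, labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Report the same four metrics the study uses.
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("precision  :", precision_score(y_te, pred))
print("sensitivity:", recall_score(y_te, pred))  # recall == sensitivity
print("specificity:", tn / (tn + fp))
print("F1-score   :", f1_score(y_te, pred))
```

A non-time-series baseline, as contrasted in the abstract, would simply use each individual sample (or per-window summary statistics) as a feature vector instead of the full windowed sequence.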
