Abstract

Research on driver sleepiness, often aimed at devising driver sleepiness detection systems, involves the computation of driver sleepiness indicators. Two such indicators, frequently studied in the literature, are the standard deviation of the vehicle's lateral position and the driver's average blink duration. How well such measures actually indicate driver sleepiness may depend on the length of the time series from which the indicators are computed. However, the question of optimal interval length when studying driver sleepiness indicators seems to be largely ignored in the literature. Instead, the specific interval length used in most papers appears rather arbitrarily chosen, or heavily influenced by the design of the study. Interval lengths of five minutes are often used, but both much shorter intervals (e.g. 60 s, 30 s or even shorter) and longer intervals (e.g. 10 or 30 minutes) appear in some studies. The present work aims to improve the situation by analyzing the performance of six indicators of driver sleepiness as a function of interval length. The findings have implications for driver sleepiness research, especially research aimed at devising a system for driver sleepiness detection. The results indicate that interval lengths of 60 s or more generally give better results than shorter intervals (10–30 s) when computing driver sleepiness indicators, but also that the difference between 60 s and even longer intervals (120–900 s) seems small.
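As a hypothetical illustration of the first indicator mentioned above, the standard deviation of lateral position (SDLP) can be computed over non-overlapping intervals of a chosen length. The sketch below assumes a uniformly sampled lateral-position signal; the function name, parameters, and sampling rate are our own and are not taken from the paper:

```python
import numpy as np

def sdlp_per_interval(lateral_position, fs, interval_s):
    """Sample standard deviation of lateral position per interval.

    lateral_position: 1-D array of lateral position samples (m)
    fs: sampling rate in Hz
    interval_s: interval length in seconds (e.g. 30, 60, 300)
    Returns one SDLP value per complete interval.
    """
    n = int(interval_s * fs)                 # samples per interval
    n_intervals = len(lateral_position) // n # discard any incomplete tail
    windows = lateral_position[: n_intervals * n].reshape(n_intervals, n)
    return windows.std(axis=1, ddof=1)       # sample SD within each interval

# Hypothetical example: 10 minutes of driving sampled at 10 Hz,
# modeled as a random-walk lateral drift (illustration only)
rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(0.0, 0.01, 6000))
print(sdlp_per_interval(signal, fs=10, interval_s=60))  # ten 60-s values
```

Varying `interval_s` (e.g. 10 s vs 60 s vs 300 s) on the same signal is the kind of comparison the paper performs across its six indicators.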
