Abstract
In this paper, we present a multi-modal approach for driver fatigue and distraction detection. Based on a driving simulator platform equipped with several sensors, we have designed a framework to acquire sensor data and to process and extract features related to fatigue and distraction. Ultimately, the features from the different sources are fused to infer the driver's state of inattention. In our work, we extract audio, color video, depth maps, heart rate, and steering wheel and pedal positions. We then process the signals in three modules, namely the vision module, the audio module, and the other-signals module. The modules are independent of each other and can be enabled or disabled at any time. Each module extracts relevant features and, based on hidden Markov models, produces its own estimate of driver fatigue and distraction. Lastly, fusion is performed using the output of each module, contextual information, and a Bayesian network; a dedicated Bayesian network was designed for fatigue and for distraction. The complementary information extracted from all the modules allows a reliable estimation of driver inattention. Our experimental results show that we are able to detect fatigue with 98.4% accuracy and distraction with 90.5% accuracy.
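The architecture described above can be sketched in miniature: each module runs its own hidden Markov model over a discrete feature stream and reports a posterior over an alert/inattentive state, and the per-module posteriors are then combined. This is only an illustrative sketch under simplifying assumptions (two states, discrete observations, naive-Bayes fusion in place of the paper's full Bayesian network with contextual information); all parameter values below are invented for the example.

```python
import numpy as np

def hmm_forward(obs, trans, emit, prior):
    """Forward algorithm: posterior over hidden states at the final step.

    obs   : sequence of discrete observation indices
    trans : (S, S) state-transition matrix
    emit  : (S, O) emission probabilities
    prior : (S,) initial state distribution
    """
    alpha = prior * emit[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        alpha /= alpha.sum()
    return alpha

def fuse(posteriors, prior):
    """Naive-Bayes fusion of module posteriors that share a common prior.

    Each module's posterior is divided by the prior once (so the prior is
    counted a single time in the fused estimate), then renormalized.
    """
    joint = prior.copy()
    for p in posteriors:
        joint = joint * (p / prior)
    return joint / joint.sum()

# Illustrative parameters (not from the paper): state 0 = alert,
# state 1 = inattentive; observation 1 is a symptom of inattention.
trans = np.array([[0.95, 0.05],
                  [0.10, 0.90]])
emit = np.array([[0.8, 0.2],
                 [0.3, 0.7]])
prior = np.array([0.9, 0.1])

# Two independent modules (e.g. vision and audio) observe their own streams.
vision_post = hmm_forward([1, 1, 1], trans, emit, prior)
audio_post = hmm_forward([1, 0, 1], trans, emit, prior)
fused = fuse([vision_post, audio_post], prior)
print(fused)  # fused probability of [alert, inattentive]
```

Because the modules are independent, any subset of posteriors can be passed to `fuse`, mirroring the design in which modules may be enabled or disabled at any time.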