Abstract

Advanced Driving Assistance Systems (ADAS) have become a critical component of modern vehicles and are vital to the safety of drivers and public road transport systems. In this paper, we present a deep learning technique that classifies drivers' distraction behaviour using three contextual awareness parameters: speed, manoeuvre and event type. Using a video-coding taxonomy, we study drivers' distractions based on event information from Regions of Interest (RoI) such as hand gestures, facial orientation and eye-gaze estimation. Furthermore, a novel probabilistic (Bayesian) model based on the Long Short-Term Memory (LSTM) network is developed for classifying driver distraction severity. This paper also proposes the use of frame-based contextual data from the multi-view TeleFOT naturalistic driving study (NDS) monitoring data to classify the severity of driver distractions. Our proposed methodology entails recurrent deep neural network layers trained to predict driver distraction severity from time-series data.
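The sketch below illustrates, in broad strokes, the kind of probabilistic LSTM classifier the abstract describes: a recurrent network over frame-level contextual features that outputs a distraction-severity class, with Monte Carlo dropout used as one common approximation to a Bayesian recurrent model. It is not the authors' implementation; the feature layout, layer sizes, number of severity classes and the use of MC dropout are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's): an LSTM classifier
# over per-frame contextual features (e.g. speed, manoeuvre, event type, RoI
# cues), with Monte Carlo dropout to obtain an approximate predictive
# distribution over severity classes.
import torch
import torch.nn as nn

class DistractionSeverityLSTM(nn.Module):
    def __init__(self, n_features=8, hidden_size=64, n_classes=3, p_drop=0.3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers=2,
                            batch_first=True, dropout=p_drop)
        self.drop = nn.Dropout(p_drop)            # kept stochastic at test time for MC sampling
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                          # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))    # class logits from the final time step

def mc_predict(model, x, n_samples=20):
    """Approximate the predictive distribution by sampling with dropout enabled."""
    model.train()                                  # keep dropout active
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)             # mean class probabilities and an uncertainty estimate

# Example: 4 clips, 30 frames each, 8 contextual features per frame (all illustrative)
model = DistractionSeverityLSTM()
x = torch.randn(4, 30, 8)
mean_p, std_p = mc_predict(model, x)
print(mean_p.argmax(dim=-1))                       # predicted severity class per clip
```

The per-sample standard deviation gives a rough uncertainty signal alongside the predicted class, which is one way a probabilistic (Bayesian-style) formulation differs from a plain LSTM classifier.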
