Abstract

Driver distraction is one of the leading causes of fatal car accidents in the U.S. Analyzing driver behavior with machine learning and deep learning models is an emerging approach for detecting abnormal behavior and alerting the driver. Models with memory, such as LSTM networks, outperform memoryless models in car-safety applications: driving is a continuous task, and exploiting the sequential structure of driving data can improve a model's performance. In this work, we used time-sequenced driving data that we collected in eight driving contexts to measure driver distraction; our model can also identify the type of behavior that caused the distraction. The driver's interaction with the car's infotainment system served as the distracting activity. A multilayer perceptron (MLP) was used as the baseline, and two LSTM-based networks, an LSTM with an attention layer and an encoder–decoder with attention, were built and trained to analyze the effect of memory and attention on the model's computational cost and performance. We compare the performance of these two networks with that of the MLP in estimating driver behavior, and show that the encoder–decoder with attention outperforms the LSTM with attention, while both attention-based LSTM networks improve on the MLP baseline.
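The abstract describes attention layers placed on top of LSTM hidden states before classification. As a minimal illustrative sketch (not the authors' implementation), the core attention-pooling step can be written in NumPy: each timestep's hidden state is scored, the scores are softmax-normalized into weights, and the weighted sum forms a context vector for the classifier. The shapes, the scoring vector `w`, and the function name `attention_pool` are all hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """Score each timestep's hidden state, normalize the scores,
    and return the attention-weighted context vector and weights."""
    scores = H @ w            # [T]: one scalar score per timestep
    alpha = softmax(scores)   # [T]: attention weights, sum to 1
    context = alpha @ H       # [d]: weighted sum of hidden states
    return context, alpha

# Toy stand-ins: T timesteps of d-dimensional LSTM hidden states.
rng = np.random.default_rng(0)
T, d = 8, 4
H = rng.standard_normal((T, d))  # would come from an LSTM in practice
w = rng.standard_normal(d)       # learned scoring vector (random here)

context, alpha = attention_pool(H, w)
```

In the full model, `context` would feed a dense classification head that predicts the distraction type; the encoder–decoder variant instead attends over encoder states at every decoder step.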
