Abstract

Driver inattention is the primary cause of vehicle accidents; hence, manufacturers have introduced systems to support the driver and improve safety. Nonetheless, advanced driver assistance systems (ADAS) must be properly designed so that the feedback they provide does not itself become a potential source of distraction for the driver. In the present study, an experiment involving auditory and haptic ADAS was conducted with 11 participants, whose attention was monitored during their driving experience. An RGB-D camera was used to acquire the drivers’ face data. These images were then analyzed using a deep learning-based approach, i.e., a convolutional neural network (CNN) specifically trained to perform facial expression recognition (FER). Analyses were carried out to assess possible relationships between these results and both ADAS activations and event occurrences, i.e., accidents. A correlation between attention and accidents emerged, whereas facial expressions and ADAS activations were found to be uncorrelated; thus, no evidence was found that the designed ADAS are a possible source of distraction. In addition to the experimental results, the proposed approach proved to be an effective tool for monitoring the driver through non-invasive techniques.
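The correlation analysis mentioned in the abstract can be sketched as follows. The variable names and the toy per-participant data are purely illustrative (they are not taken from the study); the attention score stands in for whatever summary measure is derived from the FER output, and the accident counts for the recorded event occurrences.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy values for 11 hypothetical participants: an attention score
# derived from FER output, and the number of accidents in the session.
attention = [0.9, 0.8, 0.85, 0.6, 0.55, 0.7, 0.95, 0.5, 0.65, 0.75, 0.8]
accidents = [0, 1, 0, 2, 3, 1, 0, 3, 2, 1, 1]

r = pearson(attention, accidents)
print(round(r, 3))  # a negative r: lower attention goes with more accidents
```

The same coefficient applied to facial-expression features versus ADAS activations would, per the abstract, show no meaningful correlation.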

Highlights

  • The vast majority of vehicle crashes are due to driver inattention [1]

  • Active safety brake, parking systems, and lane change warning are just a subset of the systems known as advanced driver assistance systems (ADAS) [4], which aim to support the driver in the event of a lapse in attention. However, driver inattention can also be caused by excessive automatic support leading to relaxation, as shown in several studies such as Gaspar et al. [5], and even ADAS themselves may become a source of distraction for the driver [6]

  • The research community has increasingly focused on virtual reality (VR) simulators: for example, Bozkir et al. [8] aimed to use VR to train drivers in critical situations, Caruso et al. [9] assessed the impact of the level of detail (LOD) on drivers’ behavior, Gaweesh et al. [10] evaluated the safety performance of connected vehicles in mitigating the risk of secondary crashes, and Bakhshi et al. [11] focused on scenarios involving truck drivers


Introduction

The vast majority of vehicle crashes are due to driver inattention [1]. The phenomenon has become a research problem referred to as DADA, driver attention prediction in driving accident scenarios [2,3]. To address this danger and ensure driving safety, several monitoring and control tools have been introduced into vehicles over the years. Studies for enhancing safe driving face the issue that experimental validity should not be achieved at the expense of the safety of the humans involved in the experiment [7], so they must be simulation-based. For this reason, the research community has increasingly focused on virtual reality (VR) simulators: for example, Bozkir et al. [8] aimed to use VR to train drivers in critical situations, Caruso et al. [9] assessed the impact of the level of detail (LOD) on drivers’ behavior, Gaweesh et al. [10] evaluated the safety performance of connected vehicles in mitigating the risk of secondary crashes, and Bakhshi et al. [11] focused on scenarios involving truck drivers. Researchers must take care to make their driver monitoring algorithms robust to the challenges introduced by naturalistic driving conditions, such as lighting changes, occlusions, and head pose, which are not trivial to reproduce in a simulator [12].

