Abstract

In autonomous vehicles, the driver can engage in non-driving-related tasks and does not have to monitor the driving conditions or perform manual driving. If an unexpected situation arises that the autonomous vehicle cannot manage, the vehicle should notify the driver and help them prepare to retake manual control. Several effective notification methods based on multimodal warning systems have been reported. In this paper, we propose an advanced method that triggers condition-specific visual and auditory alarms in autonomous vehicles by analyzing the differences in drivers’ responses under different situations. Using a driving simulator, we carried out human-in-the-loop experiments with a total of 38 drivers and 2 scenarios (a drowsiness scenario and a distraction scenario), each of which included a control-switching stage for signaling an alarm during autonomous driving. Reaction time, gaze indicators, and questionnaire data were collected, and electroencephalography measurements were performed to verify drowsiness. The experimental results show that the drivers exhibited high alertness to the auditory alarms in both the drowsy and distracted conditions, and that the change in the gaze indicator was larger in the distracted condition. The results also reveal a distinct difference between the drivers’ responses to alarms signaled in the drowsy and distracted conditions. Accordingly, we propose an advanced notification method and identify goals for further investigation of vehicle alarms.

Highlights

  • Driving is a complicated task comprising a range of activities such as pathfinding, potential risk detection, and longitudinal and lateral vehicle operation in a continuously changing traffic environment [1]

  • Descriptive statistical analysis was performed on the dependent variables, and normality was tested through the Kolmogorov–Smirnov test

  • Compared to drivers in the drowsy state, those in the distracted state experienced a higher increase in cognitive load after the alarm was signaled
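
The normality check mentioned above can be illustrated with a short sketch. This is not the authors’ code: the variable names and the synthetic reaction-time sample are placeholders, and it simply shows how a Kolmogorov–Smirnov normality test on a per-driver dependent variable might look using SciPy.

```python
import numpy as np
from scipy import stats

# Illustrative only: synthetic reaction times (seconds), one per driver,
# standing in for a dependent variable from the experiment (n = 38 drivers).
rng = np.random.default_rng(0)
reaction_times = rng.normal(loc=2.5, scale=0.6, size=38)

# Standardize the sample and compare it against the standard normal CDF.
# Note: estimating the mean and standard deviation from the same sample
# makes this a Lilliefors-type variant of the KS test, so the p-value
# should be read as approximate.
z = (reaction_times - reaction_times.mean()) / reaction_times.std(ddof=1)
statistic, p_value = stats.kstest(z, "norm")

print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")
```

A p-value above the chosen significance level (e.g. 0.05) indicates no evidence against normality, which would justify parametric follow-up tests on that variable.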


Introduction

Driving is a complicated task comprising a range of activities such as pathfinding, potential risk detection, and longitudinal and lateral vehicle operation in a continuously changing traffic environment [1]. Large amounts of information can be provided through advanced driver assistance systems, in-vehicle information systems, and other digital devices for convenience and to ensure the safety of the driver [3]. Autonomous driving can reduce the accident risk attributed to human errors (including driver distraction) and thereby help ensure safe driving. Autonomous vehicles are classified into levels ranging from 0 to 5 according to their degree of automation. Level 5 denotes complete automation, in which the driver’s participation in driving is not required at all. Level 3 is conditional automation, in which autonomous driving is enabled only within specific environments. When autonomous driving is conducted under conditions satisfying the level 3 requirements, the driver does not need to monitor the driving environment. When the conditions change such that autonomous driving is no longer possible, the vehicle issues a takeover request (TOR) to the driver, who must then drive the vehicle manually [6].

