Abstract
Despite many supporting systems, so-called advanced driver assistance systems (ADAS), human error remains by far the main cause of traffic accidents. In the development of new driver assistance concepts, systems and functions that monitor the driver while driving and classify their behavior in the driving context are therefore increasingly coming to the fore. In this context, this dissertation addresses the question of what the driver perceives in their environment. For this purpose, the information from the environment model has to be merged with measured gaze data. Given a precise calibration of the individual sensors, visual fixations of the driver on road users are modeled. Based on the realization that simple geometric approaches cannot answer this question of visual fixation precisely enough, characteristics of human gaze behavior are identified and integrated as model knowledge into a probabilistic tracking approach. This tracking model treats every object that the vehicle's environment perception module classifies as dynamic, and thus as a potential road user, as a possible hypothesis for the driver's current visual attention target. In addition, two different motion models of eye movements, one for fixations and one for saccades, are integrated, so that the estimation of the gaze target can follow the particular dynamics of human gaze and recognize specific connected time spans. The advantage of the resulting novel Multi-Hypothesis Multi-Model (MHMM) filter is the confidence measure characteristic of probabilistic approaches, which indicates the probability of each object being fixated by the driver. A challenge is the evaluation of such new algorithms: to determine which object the driver actually fixates, ground-truth information is necessary, and this cannot be obtained through questionnaires.
For this reason, a reference data set is created in which the recordings of the remote eye-tracking system installed in the vehicle are extended with data from wearable eye-tracking glasses. With the help of these recordings, different model approaches can be compared quantitatively, not only qualitatively. The prototypical City Assistant System, co-developed as part of this work, shows how the newly gained information about the driver's gaze behavior can be incorporated into new assistance concepts. It adapts its warning and recommendation cascade in urban intersection scenarios to the driver's driving style and gaze behavior. Through this orientation towards the driver's need for support, the City Assistant System contributes to higher acceptance of warning and recommendation systems and ultimately to increased road safety.
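To make the multi-hypothesis idea described in the abstract concrete, the following is a minimal illustrative sketch, not the dissertation's actual implementation: each dynamic object reported by the environment perception is one hypothesis for the gaze target, and the measurement likelihood mixes two motion models (a tight fixation model and a broad saccade model). All function names, parameter values, and the Gaussian likelihood form are assumptions chosen for illustration.

```python
import math

def gaussian_likelihood(gaze, obj_pos, sigma):
    """Likelihood of a 2-D gaze point given an object position (isotropic Gaussian)."""
    dx, dy = gaze[0] - obj_pos[0], gaze[1] - obj_pos[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

def update_hypotheses(priors, objects, gaze,
                      sigma_fix=1.0, sigma_sac=5.0, p_fix=0.9):
    """One filter step: mix the fixation and saccade models per object,
    weight each hypothesis by the measurement likelihood, and normalize
    so the result is a probability per object of being fixated."""
    posteriors = {}
    for obj_id, pos in objects.items():
        # During a fixation the gaze clusters tightly on the target;
        # during a saccade it scatters widely -- hence the two sigmas.
        lik = (p_fix * gaussian_likelihood(gaze, pos, sigma_fix)
               + (1.0 - p_fix) * gaussian_likelihood(gaze, pos, sigma_sac))
        posteriors[obj_id] = priors.get(obj_id, 1.0 / len(objects)) * lik
    total = sum(posteriors.values()) or 1.0
    return {k: v / total for k, v in posteriors.items()}

# Hypothetical scene: two road users, gaze measured near the pedestrian.
objects = {"car": (10.0, 0.0), "pedestrian": (0.5, 0.2)}
belief = {"car": 0.5, "pedestrian": 0.5}
belief = update_hypotheses(belief, objects, gaze=(0.4, 0.3))
```

In a real MHMM filter the motion-model mixing would itself be estimated over time (recognizing connected fixation and saccade phases), and hypotheses would be created and removed as the environment perception adds and drops objects; this sketch only shows the per-step reweighting that yields a fixation probability per object.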