Abstract

Driver’s gaze information can be crucial in driving research because of its relation to driver attention. In particular, the inclusion of gaze data in driving simulators broadens the scope of research studies, as drivers’ gaze patterns can be related to their features and performance. In this paper, we present two gaze region estimation modules integrated into a driving simulator: one uses the 3D Kinect device and the other uses the Oculus Rift virtual reality device. In every processed frame of the route, the modules detect which of the seven regions into which the driving scene was divided the driver is gazing at. Four gaze estimation methods, which learn the relation between gaze displacement and head movement, were implemented and compared: two simpler ones based on points that capture this relation, and two based on classifiers such as MLP and SVM. Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display: first with a big screen and later with the Oculus Rift. Overall, the Oculus Rift outperformed the Kinect as the best hardware for gaze estimation; the best-performing Oculus-based gaze region estimation method achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and enables a multimodal driving-performance analysis, in addition to the immersion and realism obtained with the virtual reality experience provided by the Oculus.
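The simpler point-based methods mentioned above relate head movement to gaze regions. As a minimal illustrative sketch (not the paper’s actual calibration), one could assign each of the seven scene regions a representative head-yaw angle and pick the nearest one; the region names and angles below are assumptions for illustration only.

```python
# Hypothetical sketch of a point-based gaze-region estimator: map head yaw
# to the nearest of seven region centres. Region names and centre angles
# (degrees) are illustrative assumptions, not the paper's real layout.
REGION_CENTRES = {
    "left_mirror": -60.0,
    "left_window": -40.0,
    "dashboard": -15.0,
    "road_ahead": 0.0,
    "rear_mirror": 15.0,
    "right_window": 40.0,
    "right_mirror": 60.0,
}

def estimate_region(head_yaw_deg: float) -> str:
    """Return the region whose centre angle is closest to the head yaw."""
    return min(REGION_CENTRES, key=lambda r: abs(REGION_CENTRES[r] - head_yaw_deg))

# Example: a head yaw of -3 degrees falls closest to the road-ahead centre.
print(estimate_region(-3.0))  # → road_ahead
```

A classifier-based variant (MLP or SVM) would instead learn this mapping from labelled head-pose samples rather than fixed centre points.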

Highlights

  • Event simulation is increasingly being used in many different areas

  • We evaluated the Kinect-based and the Oculus-based gaze region estimation using the four methods explained in the previous section


Summary

Introduction

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. This article is licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/).

Event simulation is increasingly being used in many different areas. Simulators can speed up the acquisition of basic abilities and serve as tools of great learning and research capacity in a wide range of areas such as surgery [1] or the Internet of Things (IoT) [2]. Driving simulators make possible the learning and re-education of drivers through the inclusion of varied routes and traffic situations that can compromise safety [3]. Driving simulators store data that can be analyzed to study the different aspects that play a role in traffic safety, as well as driving-ability shortcomings that can lead to dangerous traffic situations. We developed a driving simulator with varied routes (urban and interurban) and traffic events to analyze the driving safety level. An Android application is synchronized with the driving simulator, so that the physiological data it provides can be used in driving research studies. We intended to extend the data extracted from the driver in our simulator in order to analyze his/her attention state.

