Abstract

Recently, the need for research on tracking driver gaze has been increasing owing to the development of driver convenience systems, such as autonomous driving and intelligent driver monitoring systems, which address traffic accidents caused by driver negligence. To track the driver's gaze, a camera is installed in the vehicle. However, the accuracy of gaze estimation in vehicle environments decreases when the driver's image is motion blurred owing to vehicular vibrations during driving. Most previous studies on in-vehicle driver gaze tracking did not consider motion blur in their experiments. To address this concern, we propose a method for improving the accuracy of gaze estimation by deblurring the blurred images of a driver captured in the vehicle. This study is the first attempt to calculate a driver's gaze by deblurring a motion-blurred image with CycleGAN while simultaneously using the image information from the two cameras in the vehicle. In previous studies, multiple deep CNNs were used to process the images of a driver's eyes and face. In this study, the information obtained from the two cameras in the vehicle is integrated into a single three-channel image before deblurring, which reduces the time required for training. In addition, whereas previous studies measured the blur level of the input image and did not calculate the gaze position for severely blurred images, the proposed method calculates the gaze position for all input images. On a database collected from 26 drivers in actual vehicles (the Dongguk blurred gaze database, DBGD) and on the open Columbia gaze dataset (CAVE-DB), the proposed method exhibited higher accuracy than existing methods.
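As a rough illustration of the two-camera, three-channel integration and CycleGAN-based deblurring described above, the sketch below (assuming OpenCV and PyTorch) packs two camera frames into a single three-channel image and passes it through a generator network. The channel layout, image size, and the stub generator are illustrative assumptions, not the authors' exact architecture; in practice a trained CycleGAN generator would be loaded in place of the stub.

```python
# Illustrative sketch only: channel layout, sizes, and the stub generator are assumptions.
import cv2
import numpy as np
import torch
import torch.nn as nn

class GeneratorStub(nn.Module):
    """Placeholder for a trained CycleGAN generator (blurred -> deblurred domain)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=7, padding=3),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def build_three_channel_input(frame_cam1, frame_cam2, size=(224, 224)):
    """Pack two grayscale camera frames (plus their average) into one 3-channel image."""
    g1 = cv2.resize(cv2.cvtColor(frame_cam1, cv2.COLOR_BGR2GRAY), size)
    g2 = cv2.resize(cv2.cvtColor(frame_cam2, cv2.COLOR_BGR2GRAY), size)
    avg = ((g1.astype(np.float32) + g2.astype(np.float32)) / 2.0).astype(np.uint8)
    return np.stack([g1, g2, avg], axis=-1)  # H x W x 3

def deblur(generator, img_u8):
    """Run the generator on a 3-channel uint8 image and return a uint8 result."""
    x = torch.from_numpy(img_u8).float().permute(2, 0, 1).unsqueeze(0) / 127.5 - 1.0
    with torch.no_grad():
        y = generator(x)
    y = ((y.squeeze(0).permute(1, 2, 0) + 1.0) * 127.5).clamp(0, 255)
    return y.byte().numpy()

if __name__ == "__main__":
    gen = GeneratorStub().eval()  # in practice, load trained CycleGAN weights here
    cam1 = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in frames
    cam2 = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    fused = build_three_channel_input(cam1, cam2)
    restored = deblur(gen, fused)
    print(restored.shape)  # (224, 224, 3) deblurred input for gaze estimation
```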

Highlights

  • There has been a high demand for research on identifying a driver’s state to address traffic accidents caused by the negligence of drivers

  • We propose a method for improving the accuracy of gaze estimation by CycleGAN-based deblurring of blurred driver images captured in the vehicle

  • Because of the risk of accidents associated with the driver gazing at the 15 regions while driving, the experiment was conducted in a stationary vehicle with the engine switched on

Summary

Introduction

There has been a high demand for research on identifying a driver's state to address traffic accidents caused by driver negligence. One feature that can indicate a driver's state during driving is gaze information, which can help identify drowsy driving, attentiveness to the road, and mobile phone use. Previous research conducted indoors predominantly restricted the movement of the driver's head, which reduces motion blur compared with outdoor environments. To track a driver's gaze, a camera is used to acquire the driver's image in a moving vehicle. Motion blur is inevitably generated owing to vehicle vibrations, causing inaccurate detection of the face and eye regions and reducing gaze-tracking accuracy.
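As context for the blur problem described above, one common way to quantify how blurred a captured frame is uses the variance of the Laplacian; the sketch below, assuming OpenCV, shows this generic sharpness measure. It is not necessarily the blur metric used in the prior studies mentioned in the abstract, and the threshold is an illustrative assumption.

```python
# Generic sharpness measure; the threshold value is an illustrative assumption.
import cv2

def blur_level(image_bgr):
    """Lower variance of the Laplacian indicates a more blurred (less sharp) image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def is_severely_blurred(image_bgr, threshold=100.0):
    """Flag frames whose sharpness falls below the assumed threshold."""
    return blur_level(image_bgr) < threshold
```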
