Abstract

Research into preventing driver inattention by detecting the driver's gaze has become increasingly important as traffic accidents caused by driver inattention have increased. In a vehicle environment, conventional gaze-detection methods detect the driver's gaze using single or multiple cameras. When a single camera is used, excessive rotation of the driver's head may prevent the eye region from being accurately detected, reducing gaze-detection accuracy. To address this issue, researchers previously attempted gaze detection using dual cameras. However, these methods selectively use the information obtained from each camera; accuracy improvement is therefore limited because the information is not used simultaneously. In addition, processing complexity increases when images obtained from dual cameras are processed simultaneously. Accordingly, this paper proposes a method to detect the driver's gaze position in the vehicle. This is the first study to calculate the driver's gaze via a deep convolutional neural network (CNN) that simultaneously uses image information acquired from dual near-infrared light cameras. Previous research selectively used one of the images acquired from the dual cameras, and existing CNN-based gaze-detection methods use multiple deep CNNs for the driver's eye and facial images. In contrast, the proposed method uses one CNN model that integrates all information acquired from the dual cameras into one three-channel image and uses it as input to the network, thereby increasing recognition reliability and reducing computational cost. We conducted experiments on a self-built driver database comprising images from 26 participants (the Dongguk dual-camera-based gaze database) and the Columbia gaze dataset, an open database. The results demonstrate that the proposed method outperforms existing methods.
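The core idea of the abstract is fusing the two camera views into a single three-channel image before it enters one CNN. The excerpt does not state which image fills each channel, so the following is a minimal illustrative sketch under an assumed channel layout (front view, side view, and a hypothetical pixel-wise mean as the third channel):

```python
import numpy as np

def build_three_channel_input(front_img, side_img):
    """Combine grayscale images from the front and side NIR cameras
    into one 3-channel array suitable as input to a single CNN.

    The channel assignment below is an assumption for illustration;
    the abstract only states that all dual-camera information is
    merged into one three-channel image.
    """
    # Assume both inputs are uint8 grayscale arrays of the same shape.
    front = front_img.astype(np.float32) / 255.0
    side = side_img.astype(np.float32) / 255.0
    # Hypothetical third channel: pixel-wise mean of the two views.
    fused = 0.5 * (front + side)
    return np.stack([front, side, fused], axis=-1)  # H x W x 3
```

A single H x W x 3 tensor like this can be fed to any standard three-channel CNN backbone (the highlights name ResNet), avoiding the multiple per-region networks used by earlier CNN-based methods.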

Highlights

  • Monitoring the status information of drivers has recently become necessary

  • Because gazing at the regions shown in Figure 5 while driving posed an accident risk, we instead acquired images while the actual vehicle was driven in various places, such as roads and parking lots

  • In contrast to conventional studies on driver gaze classification in a vehicle environment, the image information obtained from the front and side cameras was used simultaneously, and the driver's gaze was estimated using a combined three-channel image as input to a deep residual network (ResNet)

Introduction

The driver's gaze provides some of the most important information for understanding driver status during driving, because it makes it possible to determine whether the driver is facing forward, whether in-vehicle devices are being used, and the driver's current condition. User calibration is performed to correct the difference between the visual axis and the pupillary axis (the kappa angle) and because eyeball size differs from person to person. However, the gaze points required for such calibration cannot be displayed inside a vehicle.
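The calibration step mentioned above is typically done by showing the user a few known target points and fitting a mapping from measured pupil-center coordinates to those targets. The paper's exact procedure is not given in this excerpt; the sketch below uses a common second-order polynomial regression (all function names are illustrative), which makes clear why calibration needs displayable targets, something a vehicle interior precludes:

```python
import numpy as np

def fit_gaze_calibration(pupil_xy, target_xy):
    """Fit a second-order polynomial mapping from pupil-center
    coordinates to known gaze-target coordinates.

    pupil_xy, target_xy: (N, 2) arrays from N calibration points
    (N >= 6 is required for the six polynomial terms).
    Returns a coefficient matrix C of shape (6, 2).
    """
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    # Design matrix with terms 1, x, y, xy, x^2, y^2.
    A = np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=1)
    C, *_ = np.linalg.lstsq(A, target_xy, rcond=None)
    return C

def predict_gaze(C, pupil_xy):
    """Map new pupil-center coordinates to gaze positions."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    A = np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=1)
    return A @ C
```

Because the fit depends on per-user calibration targets, a display-free vehicle environment motivates the calibration-free, appearance-based approach the paper pursues instead.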

