Abstract

Eye pupil tracking is important for augmented reality (AR) three-dimensional (3D) head-up displays (HUDs). Accurate and fast eye tracking remains challenging under real driving conditions that involve eye occlusions, such as wearing sunglasses. In this paper, we propose a system for commercial use that can handle practical driving conditions. Our system classifies human faces into bare faces and sunglasses faces, which are treated differently. For bare faces, our eye tracker regresses the pupil area in a coarse-to-fine manner based on a revised Supervised Descent Method (SDM) eye-nose alignment. For sunglasses faces, because the eyes are occluded, our eye tracker performs whole-face alignment with a revised Practical Facial Landmark Detector (PFLD) to track the pupil centers. Furthermore, we propose a structural-inference-based re-weight network that predicts eye positions from non-occluded areas, such as the nose and mouth. The proposed re-weight sub-network revises the importance of different feature-map positions and predicts the occluded eye positions from the non-occluded parts. The proposed eye tracker is made robust by a tracker-checker and keeps a small model size. Experiments show that our method achieves high accuracy and speed, with approximately 1.5 mm and 6.5 mm error for bare and sunglasses faces, respectively, in less than 10 ms on a 2.0 GHz CPU. The evaluation dataset was captured indoors and outdoors to reflect multiple sunlight conditions. Our proposed method, combined with an AR 3D HUD, shows promising results for commercialization with low-crosstalk 3D images.
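
As a rough illustration of the two-branch design described above, the following minimal sketch shows how a frame could be dispatched between the bare-face and sunglasses-face trackers. The callables (face_classifier, bare_tracker, sunglasses_tracker, checker) are hypothetical placeholders standing in for the paper's components, not the authors' implementation.

```python
# Minimal sketch of the two-branch tracking flow described in the abstract.
# All callables passed in are hypothetical placeholders, not the authors' API.
from typing import Callable, Optional, Tuple

Pupils = Tuple[Tuple[float, float], Tuple[float, float]]  # (left, right) pupil centers

def track_pupils(frame,
                 face_classifier: Callable,    # returns "bare" or "sunglasses"
                 bare_tracker: Callable,       # SDM-style coarse-to-fine eye-nose alignment
                 sunglasses_tracker: Callable, # PFLD-style whole-face alignment + re-weighting
                 checker: Callable) -> Optional[Pupils]:
    """Dispatch a frame to the branch that matches the detected face type."""
    if face_classifier(frame) == "sunglasses":
        # Eyes are occluded: infer their positions from non-occluded parts
        # (nose, mouth) through the structural re-weight sub-network.
        pupils = sunglasses_tracker(frame)
    else:
        # Bare face: regress the eye region coarse-to-fine, then the pupil centers.
        pupils = bare_tracker(frame)
    # The tracker-checker rejects drifted results so the system can re-detect.
    return pupils if checker(frame, pupils) else None
```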

Highlights

  • Augmented reality (AR) three-dimensional (3D) head-up displays (HUDs) are a promising technology for next-generation assistive driving systems

  • Accurate real-time pupil tracking while driving with an AR 3D HUD is challenging due to multiple real-world driving conditions, such as varying light conditions, head pose changes, eyeglasses reflections, and eye occlusion caused by wearing sunglasses (Figure 1b)

  • We extended our previous studies [9], [10], which were based on the fast Supervised Descent Method (SDM) [11] algorithm with Scale-Invariant Feature Transform (SIFT) [12]; a minimal sketch of one SDM descent step is shown after this list
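
To make the SDM-based alignment concrete, here is a minimal, self-contained sketch of the cascaded update x_{k+1} = x_k + R_k φ(I, x_k) + b_k. The toy mean-intensity feature is only a stand-in for the SIFT descriptors used in the actual trackers, and the per-stage regressors (R_k, b_k) are assumed to have been learned offline; none of this reflects the authors' exact implementation.

```python
# Minimal sketch of SDM-style cascaded regression: x <- x + R * phi(I, x) + b.
# The toy mean-intensity feature stands in for SIFT descriptors; R_k and b_k
# are assumed to be fit offline by linear regression on training images.
import numpy as np

def toy_features(image: np.ndarray, landmarks: np.ndarray, half: int = 8) -> np.ndarray:
    """Placeholder feature: mean intensity of a patch around each landmark."""
    feats = []
    h, w = image.shape[:2]
    for x, y in landmarks.reshape(-1, 2):
        x0, y0 = int(max(x - half, 0)), int(max(y - half, 0))
        x1, y1 = int(min(x + half, w)), int(min(y + half, h))
        patch = image[y0:y1, x0:x1]
        feats.append(patch.mean() if patch.size else 0.0)
    return np.asarray(feats)

def sdm_align(image: np.ndarray, x_init: np.ndarray, stages) -> np.ndarray:
    """Apply each learned stage (R_k, b_k) as one supervised descent step."""
    x = x_init.astype(float).copy()            # flattened (x1, y1, x2, y2, ...) landmarks
    for R, b in stages:                         # each stage refines the previous estimate
        phi = toy_features(image, x)
        x = x + R @ phi + b
    return x
```

Because each stage reduces to a feature extraction followed by a single matrix-vector product, this style of cascade is what makes SDM-based alignment fast enough for CPU-only, real-time use.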


Summary

Introduction

Augmented reality (AR) three-dimensional (3D) head-up displays (HUDs) are a promising technology for next-generation assistive driving systems. While two-dimensional (2D) HUDs can cause additional distractions and visual mismatches between real-world and virtual objects, AR 3D HUDs can overlay 3D visual information directly on the road after 3D depth adjustments [1], [2] (Figure 1a). In such systems, an autostereoscopic 3D display [3]–[5] is important to provide the user with a realistic sense of image depth without the need for 3D eyeglasses. Accurate real-time pupil tracking while driving with an AR 3D HUD is challenging due to multiple real-world driving conditions, such as varying light conditions, head pose changes, eyeglasses reflections, and eye occlusion caused by wearing sunglasses (Figure 1b). These challenges become even more difficult to overcome under limited vehicle system resources.
