Abstract

This study develops an eye tracking method for autostereoscopic three-dimensional (3D) display systems for use in various environments. The eye tracking-based autostereoscopic 3D display overcomes the viewing-position restriction and provides a seamless, low-crosstalk, high-resolution 3D experience without 3D eyeglasses. However, accurate and fast eye position detection and tracking remain challenging, owing to varying lighting conditions, camera control, thick eyeglasses, sunlight reflections on eyeglasses, and limited system resources. This study presents a robust, automated algorithm and relevant systems for accurate and fast detection and tracking of eye pupil centers in 3D with a single visual camera and near-infrared (NIR) light emitting diodes (LEDs). Our proposed eye tracker consists of eye–nose detection, eye–nose shape keypoint alignment, a tracker checker, and tracking with NIR LED on/off control. Eye–nose detection generates facial subregion boxes, including the eyes and nose, and utilizes an Error-Based Learning (EBL) method to select the best-learned database (DB). After detection, eye–nose shape alignment is performed by the Supervised Descent Method (SDM) with Scale-Invariant Feature Transform (SIFT) features. The aligner is content-aware in the sense that a designated aligner is applied according to an image content classification, such as the lighting condition or whether the user wears eyeglasses. Experiments conducted on real image DBs yield promising eye detection and tracking outcomes, even under challenging conditions.
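The abstract's pipeline (eye–nose detection, keypoint alignment, a tracker checker, and NIR LED control) can be sketched as a simple per-frame loop. This is a minimal illustration, not the paper's implementation: every function below is a hypothetical stub standing in for the corresponding stage (the EBL-trained detector, the content-aware SDM + SIFT aligner, the tracker checker, and a toy brightness rule for the LEDs).

```python
from enum import Enum, auto

class Mode(Enum):
    DETECT = auto()  # run full eye-nose detection on the whole frame
    TRACK = auto()   # align keypoints near the previous eye positions

def eye_nose_detect(frame):
    # Stand-in for the EBL-trained eye-nose box detector.
    return frame.get("eye_nose_box")

def align_keypoints(frame, box):
    # Stand-in for the content-aware SDM + SIFT shape aligner; a real
    # system would pick a designated aligner per content class
    # (lighting condition, eyeglasses vs. no eyeglasses).
    return frame.get("keypoints") if box is not None else None

def tracker_check(keypoints):
    # Stand-in for the tracker checker that validates the aligned shape.
    return keypoints is not None

def nir_led_on(frame):
    # Toy rule (assumption): switch the NIR LEDs on when the scene is dark.
    return frame.get("luminance", 1.0) < 0.2

def track_frame(frame, mode, prev_box=None):
    """One iteration of the sketched detect/track loop."""
    led = nir_led_on(frame)
    box = eye_nose_detect(frame) if mode is Mode.DETECT else prev_box
    keypoints = align_keypoints(frame, box)
    if tracker_check(keypoints):
        return keypoints, Mode.TRACK, led   # keep tracking next frame
    return None, Mode.DETECT, led           # fall back to full detection
```

The key design point mirrored here is that full detection runs only when the checker rejects the aligned shape; otherwise the cheaper local alignment carries the track from frame to frame.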

Highlights

  • Autostereoscopic three-dimensional (3D) displays provide immersive visual experiences with a realistic sense of the image depth without the need for 3D eyeglasses [1,2]

  • Our 10.1” tablet and 31.5” personal monitor prototypes that use the eye tracking-based directional subpixel rendering algorithm are described in [2,3]. This eye tracking-based autostereoscopic 3D method can be utilized in head-up displays (HUDs) (Figure 1a), which have been increasingly used by the automotive industry

  • Our results demonstrated accurate and fast tracking of the eye-center position under various illumination and user conditions

Introduction

Autostereoscopic three-dimensional (3D) displays provide immersive visual experiences with a realistic sense of image depth without the need for 3D eyeglasses [1,2]. The eye tracking-based autostereoscopic 3D method overcomes the viewing-position restriction of conventional autostereoscopic displays, allows a seamless single-user 3D experience, and provides higher-resolution 3D content. Our 10.1” tablet and 31.5” personal monitor prototypes that use the eye tracking-based directional subpixel rendering algorithm are described in [2,3]. This eye tracking-based autostereoscopic 3D method can also be utilized in head-up displays (HUDs) (Figure 1a), which are increasingly used by the automotive industry. A two-dimensional (2D) HUD shows augmented reality (AR) information in a 2D virtual plane, which causes additional distraction and visual mismatches.

Sensors 2020, 20, 4787; doi:10.3390/s20174787 www.mdpi.com/journal/sensors
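As a toy illustration of why the tracked eye positions drive the directional subpixel rendering (the paper's actual rendering model is given in [2,3]), consider a pinhole model in which a subpixel at horizontal position $x_s$ should emit light toward a tracked eye at horizontal position $x_e$ and viewing distance $D$, i.e., along the angle

```latex
\theta(x_s) = \arctan\frac{x_e - x_s}{D}
```

Per subpixel, the renderer can then choose the left- or right-eye image depending on which tracked eye's angle is closer to the direction in which the lenticular optics actually steer that subpixel's light. All symbols here are illustrative assumptions, not the paper's notation.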
