Abstract

Researchers are considering the use of eye tracking in head-mounted camera systems, such as Google’s Project Glass. Typical methods require detailed calibration in advance, but long periods of use disrupt the calibration between the eye and the scene camera. Moreover, even when a portable eye-tracker estimates the point-of-regard, the focused object itself may remain unidentified. Therefore, we propose a novel method for estimating the object that a user is focused upon, where an eye camera captures the reflection on the corneal surface. Eye and environment information can be extracted from the corneal surface image simultaneously. We use inverse ray tracing to rectify the reflected image and a scale-invariant feature transform to identify the object at the point-of-regard. Unwarped images can also be generated continuously from corneal surface images. We consider that our proposed method could be applied to a guidance system, and we confirmed the feasibility of this application in experiments that estimated the focused object and the point-of-regard.

Highlights

  • Head-mounted camera systems, such as Google’s Project Glass, and daily-use eye-tracking devices such as wearable EOG goggles (Bulling, Roggen, & Troster, 2009) and Aided Eyes (Ishiguro, Mujibiya, Miyaki, & Rekimoto, 2010) require an eye-based intuitive input method

  • We propose a method that estimates the focused object from a corneal surface image using 3D model-based iris tracking, for application in a device suitable for daily use

  • We developed a wearable device, suitable for daily use, that captures corneal surface images



Introduction

Head-mounted camera systems, such as Project Glass by Google, require an eye-based intuitive input method. This applies to other eye-tracking research devices, such as wearable EOG goggles (Bulling, Roggen, & Troster, 2009) and Aided Eyes (Ishiguro, Mujibiya, Miyaki, & Rekimoto, 2010), which have been proposed for daily use. We propose a user-friendly head-mounted eye camera system, which can extract scene and eye information from the user’s corneal images. Corneal surface images can be archived continuously using a wearable camera, so the scene and eye information can be extracted by model-based eye tracking. The methods used to extract scene images from corneal surface images and for model-based eye tracking are described in Section IV and Section V, respectively. The final section presents our conclusions and outlines our future work.
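The core of extracting scene information from a corneal image is modelling the cornea as a spherical mirror and tracing each eye-camera ray back into the scene. The following is a minimal sketch of that geometry, not the paper's implementation: it assumes a simple spherical corneal model with a typical anatomical radius of 7.8 mm (a standard value, not taken from this work) and computes, for one camera ray, the corneal intersection point and the reflected scene-ray direction.

```python
import numpy as np

def reflect_off_cornea(ray_origin, ray_dir, cornea_center, cornea_radius=7.8):
    """Intersect a camera ray with a spherical corneal model and return
    the intersection point and the mirror-reflected (scene) ray direction.

    Units are millimetres; the 7.8 mm radius is a typical anatomical
    value assumed for illustration. Returns (None, None) on a miss.
    """
    d = ray_dir / np.linalg.norm(ray_dir)
    oc = ray_origin - cornea_center
    # Solve |oc + t*d|^2 = r^2 for the nearest intersection parameter t.
    b = 2.0 * np.dot(oc, d)
    c = np.dot(oc, oc) - cornea_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None, None  # the ray misses the corneal sphere
    t = (-b - np.sqrt(disc)) / 2.0
    point = ray_origin + t * d
    normal = (point - cornea_center) / cornea_radius
    # Mirror reflection: r = d - 2 (d . n) n
    reflected = d - 2.0 * np.dot(d, normal) * normal
    return point, reflected
```

Applying this per pixel, and inverting the mapping, is what allows an unwarped scene image to be recovered from the corneal reflection; the full method additionally needs the iris pose from model-based tracking to place the corneal sphere.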


