Abstract

Eye tracking, or eye pointing, in head-mounted displays enables new input modalities for point-select tasks. The goal of this paper is to explore Fitts' law modeling of eye-based selection in a virtual reality environment, using controller-based input as the baseline against two types of eye-based interaction (dwell and physical trigger) in both two-dimensional and three-dimensional environments. In general, controller-based interaction offered the highest throughput and the best accuracy, and was preferred by most participants; eye-trigger interaction performed roughly between the other two modes. However, the performance differences among the three interaction modes became smaller for three-dimensional targets, where the eye-based interactions were slightly more accurate. Overall, eye-based interaction still has a long way to go before it becomes a mainstream interaction modality in virtual reality, largely due to the lack of more stable and precise eye-tracking devices with better calibration methods. Still, in certain virtual reality environments, eye-based interaction has irreplaceable potential.
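
For reference, throughput in Fitts' law studies is conventionally computed with the ISO 9241-9 (Shannon) formulation; this is an assumption here, as the abstract does not state the exact model used. The effective index of difficulty is ID_e = log2(D / W_e + 1), where D is the movement distance and W_e = 4.133 × SD_x is the effective target width derived from the standard deviation of selection endpoints along the task axis, and throughput is TP = ID_e / MT, with MT the mean movement time per trial.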
