Abstract

This paper presents extended work on LazyNav, a head-free, eyes-free, and hands-free mid-air ground navigation control model presented at IEEE 3D User Interfaces (3DUI) 2015, in particular with a new application to head-mounted displays (HMDs). Our mid-air interaction metaphor uses only a single pair of the remaining tracked body elements to tailor the navigation. The user can therefore navigate the scene while still performing other interactions with her hands and head, e.g., carrying a bag, grasping a cup of coffee, or observing the content by moving her eyes and locally rotating her head. We design several body motions for navigation by considering the use of non-critical body parts and develop assumptions about ground navigation techniques. Through user studies, we investigate which motions are easy to discover, easy to control, socially acceptable, accurate, and not tiring. Finally, we evaluate the desired ground navigation features with a prototype application in both large display (LD) and HMD navigation scenarios. We highlight several recommendations for designing a mid-air ground navigation technique for an LD and an HMD.
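To make the interaction metaphor concrete, the sketch below shows one possible way a single pair of tracked body elements could drive ground navigation while leaving head, eyes, and hands free. It is a minimal illustration only: the chosen body parts (torso lean and hip yaw), the dead zone, and the gains are assumptions for this sketch, not the mappings designed or evaluated in the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's actual LazyNav mapping):
# a single pair of tracked body elements drives ground navigation, so hands and
# head remain free for other tasks.

from dataclasses import dataclass


@dataclass
class NavigationCommand:
    forward_speed: float  # metres per second
    turn_rate: float      # radians per second


def ground_navigation(torso_lean: float, hip_yaw: float,
                      dead_zone: float = 0.05,
                      speed_gain: float = 2.0,
                      turn_gain: float = 1.5) -> NavigationCommand:
    """Map a hypothetical tracked pair (torso lean, hip yaw) to a navigation command.

    torso_lean: normalised forward/backward lean of the torso in [-1, 1]
    hip_yaw:    normalised left/right hip rotation in [-1, 1]
    A dead zone suppresses small, unintentional postural motions.
    """
    def apply_dead_zone(value: float) -> float:
        return 0.0 if abs(value) < dead_zone else value

    return NavigationCommand(
        forward_speed=speed_gain * apply_dead_zone(torso_lean),
        turn_rate=turn_gain * apply_dead_zone(hip_yaw),
    )


# Example: a slight forward lean with a small hip rotation yields slow forward
# motion combined with a gentle turn.
print(ground_navigation(torso_lean=0.3, hip_yaw=-0.1))
```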

