Abstract

Hand-held smart devices are equipped with powerful processing units, high-resolution screens, and cameras, which in combination make them suitable for video see-through Augmented Reality. Many Augmented Reality applications require interaction, such as selection and 3D pose manipulation. One way to perform intuitive, high-precision 3D pose manipulation is by direct or indirect mapping of device movement. There are two approaches to device-movement interaction. The first fixes the virtual object to the device, which thereby becomes the object's pivot point, making it difficult to rotate the object without also translating it. The second approach avoids this issue by treating rotation and translation separately, relative to the object's center point; as a result, the object instead moves out of view for yaw and pitch rotations. In this paper we study these two techniques and compare them with a modification in which user-perspective rendering is used to solve the rotation issues. The study showed that the modification improves speed as well as perceived control and intuitiveness among the subjects.
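The two device-movement mappings compared above can be illustrated with a short sketch. This is a minimal illustration under assumptions of our own (4x4 homogeneous poses, numpy, incremental rotation/translation deltas), not the authors' implementation; all function and variable names here are hypothetical.

    # Hypothetical sketch: two ways to map hand-held device motion onto a
    # virtual object's 6-DoF pose (assumed representation, not the paper's code).
    import numpy as np

    def make_pose(R, t):
        # Build a 4x4 rigid transform from a 3x3 rotation and a 3-vector.
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def device_fixed_update(device_pose, offset):
        # Approach 1: the object is rigidly attached to the device, so the
        # device becomes the pivot point; rotating the device inevitably
        # translates the object as well.
        return device_pose @ offset

    def object_centered_update(object_pose, dR, dt):
        # Approach 2: device rotation and translation deltas are applied
        # separately, with rotation taken about the object's own center.
        # The object no longer swings around the device, but for yaw and
        # pitch the device camera turns away, so the object leaves the view.
        R_obj = object_pose[:3, :3]
        t_obj = object_pose[:3, 3]
        return make_pose(dR @ R_obj, t_obj + dt)

The user-perspective-rendering modification studied in the paper addresses the second approach's view problem at the rendering stage rather than in the pose update itself, so it is not reflected in this sketch.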
