Abstract

Current hand-held smart devices are equipped with powerful processors, high-resolution screens, and sharp cameras that make them well suited for Augmented Reality (AR) applications. Such applications commonly use interaction techniques adapted for touch, such as touch selection and multi-touch pose manipulation, which map 2D gestures to 3D actions. To enable direct 3D interaction for hand-held AR, an alternative is to use changes of the device pose for 6 degrees-of-freedom interaction. In this article we explore selection and pose manipulation techniques that aim to minimize the amount of touch input. To this end, we study the characteristics of both non-touch selection and non-touch pose manipulation techniques. We present two studies that, on the one hand, compare the selection techniques with common touch selection and, on the other, investigate the effect of user gaze control on the non-touch pose manipulation techniques.
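As a concrete illustration of the non-touch idea, the Python sketch below shows one common way device pose changes can drive 6 degrees-of-freedom manipulation: the device's rigid motion since the moment the manipulation started is applied directly to the grabbed object. This is a minimal sketch under assumed conventions (4x4 world-frame pose matrices as provided by a typical AR tracker); all class and variable names are hypothetical and not taken from the article.

```python
import numpy as np

class DevicePoseManipulator:
    """Hypothetical sketch: 6-DoF manipulation driven by device pose changes.

    Poses are 4x4 homogeneous matrices in a shared world frame, as typically
    provided by an AR tracking framework.
    """

    def __init__(self):
        self.grab_device_inv = None   # inverse of the device pose at grab time
        self.grab_object_pose = None  # object pose at grab time

    def begin(self, device_pose: np.ndarray, object_pose: np.ndarray):
        # Remember the relative configuration when manipulation starts.
        self.grab_device_inv = np.linalg.inv(device_pose)
        self.grab_object_pose = object_pose.copy()

    def update(self, device_pose: np.ndarray) -> np.ndarray:
        # Device motion since the grab, expressed in world coordinates...
        delta = device_pose @ self.grab_device_inv
        # ...is applied rigidly to the object, so translating and rotating the
        # device translates and rotates the attached object by the same amount.
        return delta @ self.grab_object_pose


# Usage: translating the device by 10 cm along x moves the grabbed object likewise.
manip = DevicePoseManipulator()
device0 = np.eye(4)
obj0 = np.eye(4); obj0[:3, 3] = [0.0, 0.0, -0.5]
manip.begin(device0, obj0)
device1 = np.eye(4); device1[:3, 3] = [0.1, 0.0, 0.0]
print(manip.update(device1)[:3, 3])   # -> [ 0.1  0.  -0.5]
```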

Highlights

  • Smart devices such as cellphones and tablets are highly suitable for video see-through Augmented Reality (AR), as they are equipped with high-resolution screens, good cameras, and powerful processing units

  • Touch-based on-screen interaction techniques are commonly used in hand-held video see-through Augmented Reality (AR), where a single touch is used for selection and multi-touch gestures are used for manipulation [7,8] (see Figure 1, right)

  • The analysis indicates that the manipulation technique significantly affects the task completion time, Wilks' Lambda = 0.645, F(1,17) = 9.369, p = 0.007, in favour of Fix-user perspective rendering (UPR); see the sketch below
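The reported statistics can be connected by a short calculation: for a within-subject factor with only two levels, Wilks' Lambda reduces to Λ = E/(E + H) computed on the per-participant differences, and F = (n−1)(1−Λ)/Λ with df (1, n−1), so df = (1, 17) suggests 18 participants under this design. The Python sketch below uses hypothetical data (not the study's) purely to illustrate the relationship.

```python
import numpy as np

# Hypothetical per-participant completion times (seconds) for two manipulation
# techniques; these numbers are illustrative only, not the study's data.
rng = np.random.default_rng(0)
n = 18
time_fix_upr = rng.normal(20.0, 4.0, n)                 # e.g. Fix-UPR technique
time_other   = time_fix_upr + rng.normal(3.0, 3.0, n)   # a slower comparison technique

# For a two-level within-subject factor, the multivariate test acts on the
# per-participant differences, and Wilks' Lambda reduces to a scalar ratio.
d = time_other - time_fix_upr
H = n * d.mean() ** 2                  # hypothesis sum of squares
E = ((d - d.mean()) ** 2).sum()        # error sum of squares
wilks_lambda = E / (E + H)
F = (n - 1) * (1.0 - wilks_lambda) / wilks_lambda        # df = (1, n - 1)

print(f"Wilks' Lambda = {wilks_lambda:.3f}, F(1,{n - 1}) = {F:.3f}")
# Sanity check against the reported effect: Lambda = 0.645 with n = 18 gives
# F = 17 * (1 - 0.645) / 0.645 ≈ 9.36, consistent with F(1,17) = 9.369
# (the small gap comes from rounding of Lambda).
```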


Summary

Introduction

Smart devices such as cellphones and tablets are highly suitable for video see-through Augmented Reality (AR), as they are equipped with high-resolution screens, good cameras, and powerful processing units. In head-mounted AR, based on devices such as the HoloLens, wands or mid-air gestures can be used to provide direct 3D pose manipulation [5,6] by directly mapping 3D gestures to pose changes (Figure 1, left). Touch-based on-screen interaction techniques are commonly used in hand-held video see-through AR, where a single touch is used for selection and multi-touch gestures are used for manipulation [7,8] (Figure 1, right). Multi-touch techniques provide robust interaction, but they lack the direct 3D connection between gesture and pose. This issue primarily affects selecting in depth and manipulating the third dimension, which is not available on a 2D screen and requires the introduction of 2D metaphors that are mapped to 3D actions.
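To make the touch-selection baseline concrete, the sketch below shows the usual approach: a 2D touch point is unprojected through the camera intrinsics into a world-space ray, which is then intersected with the bounding spheres of the augmented objects. This is a hypothetical illustration, not the article's implementation; function names, the sphere-based picking, and the pinhole camera assumptions are all ours.

```python
import numpy as np

def touch_to_ray(touch_px, intrinsics, camera_pose):
    """Unproject a 2D touch point into a world-space ray.

    touch_px:    (u, v) pixel coordinates of the touch.
    intrinsics:  3x3 camera matrix K of the device camera.
    camera_pose: 4x4 camera-to-world transform from the AR tracker.
    """
    u, v = touch_px
    # Direction in camera coordinates (pinhole model, depth normalised to 1).
    dir_cam = np.linalg.inv(intrinsics) @ np.array([u, v, 1.0])
    # Rotate into world coordinates; the ray origin is the camera centre.
    origin = camera_pose[:3, 3]
    direction = camera_pose[:3, :3] @ dir_cam
    return origin, direction / np.linalg.norm(direction)

def pick_object(origin, direction, objects):
    """Return the nearest object whose bounding sphere the ray hits.

    objects: list of (name, centre (3,), radius) tuples.
    """
    best, best_t = None, np.inf
    for name, centre, radius in objects:
        oc = centre - origin
        t = oc @ direction               # distance to the closest approach along the ray
        if t < 0:
            continue                     # object lies behind the camera
        dist2 = oc @ oc - t * t          # squared ray-to-centre distance
        if dist2 <= radius ** 2 and t < best_t:
            best, best_t = name, t
    return best
```

Note how the 2D-to-3D mapping only constrains two of the three coordinates: the ray leaves depth ambiguous, so selection in depth has to be resolved by an extra rule (here, simply taking the nearest hit), which is exactly the limitation the non-touch techniques studied in this article aim to address.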

