Abstract

Recently released, depth-sensing-capable, and moderately priced handheld devices support the implementation of augmented reality (AR) applications without requiring the tracking of visually distinct markers. This relaxed constraint allows for applications with a significantly larger augmentation space, larger virtual objects, and greater freedom of user movement. Because these devices are relatively new, there has been little study of issues concerning direct virtual object manipulation in AR applications running on them. This paper presents the results of a survey of existing object manipulation methods designed for traditional handheld devices and identifies those potentially viable for newer, depth-sensing-capable devices. The paper then describes the following: a test suite that implements the identified methods, test cases designed specifically for the characteristics offered by the new devices, the user testing process, and the corresponding results. Based on the study, this paper concludes that AR applications on newer, depth-sensing-capable handheld devices should manipulate small-scale virtual objects by mapping directly to device movements and large-scale virtual objects by supporting separate translation and rotation modes. Our work and results are a first step toward better understanding the requirements for supporting direct virtual object manipulation in AR applications running on a new generation of depth-sensing-capable handheld devices.
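To make the recommended direct mapping for small-scale objects concrete, the following minimal sketch applies the device's frame-to-frame motion one-to-one to the pose of the selected virtual object. It is an illustration under assumptions, not the paper's implementation: the VirtualObject container, the 4x4 homogeneous transforms, and the source of the device pose are all placeholders.

    import numpy as np

    class VirtualObject:
        """Hypothetical container for a manipulable virtual object's pose."""
        def __init__(self, pose=None):
            # Pose stored as a 4x4 homogeneous transform in world coordinates.
            self.pose = np.eye(4) if pose is None else pose

    def apply_direct_mapping(obj, prev_device_pose, curr_device_pose):
        """One-to-one mapping: the device's motion between two frames is
        applied to the selected object, so the object follows the device."""
        # Rigid motion of the device between the previous and current frames.
        delta = curr_device_pose @ np.linalg.inv(prev_device_pose)
        # Apply the same rigid motion to the object's world pose.
        obj.pose = delta @ obj.pose

In practice the per-frame device pose would come from the platform's tracking system (for example, the camera transform that ARKit or ARCore reports each frame); translation and rotation are manipulated simultaneously, which is what makes this style of interaction stateless.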

Highlights

  • The Merriam Webster dictionary defines augmented reality (AR) as “an enhanced version of reality created by the use of technology to overlay digital information on an image being viewed through a device”

  • Our results indicate that many findings from marker-based AR studies remain valid for markerless AR, though their specific applicability depends on the size of the virtual object being manipulated

  • Recalling that Tests 1 to 3 were designed for small-scale objects whereas Tests 4 to 7 were for larger-scale augmented space and virtual objects, these results suggest that users welcomed the stateless simplicity and one-to-one proxy manipulation of integrated view input (IVI) and Modified IVI (MOD IVI) for small-scale objects
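For the larger-scale objects addressed in Tests 4 to 7, the study's conclusion favours separate translation and rotation modes instead of the one-to-one mapping. The sketch below is a hypothetical illustration of such mode separation driven by touch drags; the Mode enum, gesture handling, and scaling constants are assumptions for exposition and not the interface evaluated in the user tests.

    import numpy as np
    from enum import Enum

    class Mode(Enum):
        TRANSLATE = 1   # drags move the object in the ground plane
        ROTATE = 2      # drags spin the object about its vertical axis

    class ModalManipulator:
        """Hypothetical controller that keeps translation and rotation separate."""
        def __init__(self, obj_pose):
            self.pose = obj_pose          # 4x4 world transform of the object
            self.mode = Mode.TRANSLATE

        def toggle_mode(self):
            # A UI button or gesture would switch between the two modes.
            self.mode = Mode.ROTATE if self.mode is Mode.TRANSLATE else Mode.TRANSLATE

        def on_drag(self, dx, dy, metres_per_pixel=0.002, radians_per_pixel=0.01):
            if self.mode is Mode.TRANSLATE:
                # Map a screen-space drag to a translation on the world X/Z plane.
                offset = np.eye(4)
                offset[0, 3] = dx * metres_per_pixel
                offset[2, 3] = dy * metres_per_pixel
                self.pose = offset @ self.pose
            else:
                # Map a horizontal drag to a yaw rotation about the object's origin.
                angle = dx * radians_per_pixel
                c, s = np.cos(angle), np.sin(angle)
                yaw = np.array([[c, 0.0, s, 0.0],
                                [0.0, 1.0, 0.0, 0.0],
                                [-s, 0.0, c, 0.0],
                                [0.0, 0.0, 0.0, 1.0]])
                self.pose = self.pose @ yaw

Splitting the degrees of freedom this way trades the simplicity of the stateless mapping for finer control, which is why the results favour it only when the augmented space and virtual objects are large.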



Introduction

The Merriam-Webster dictionary defines augmented reality (AR) as "an enhanced version of reality created by the use of technology to overlay digital (this paper uses "digital" and "virtual" interchangeably) information on an image being viewed through a device" (https://www.merriamwebster.com/dictionary/augmented+reality). Implementing AR on handheld devices (this paper uses "handheld" and "mobile" interchangeably), such as ubiquitous smartphones or tablet devices, is an effective way of connecting the general public to this technology and of promoting the creation of next-generation applications [3,4]. Early handheld AR platforms relied on explicitly positioned, predefined visual markers to establish the correspondence between the real and virtual worlds, e.g., [5,6,7]. These markers must be kept within the application's view at all times and are processed and tracked dynamically in real time [8]. Virtual information is then integrated into the physical environment based on the location and orientation of these visual markers.
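As a rough illustration of this marker-based pipeline, the sketch below detects a predefined fiducial marker in a camera frame and recovers its pose relative to the camera, which a renderer could then use to anchor virtual content. It assumes the pre-4.7 cv2.aruco module API from OpenCV's contrib package, and the intrinsics and marker size are placeholder values rather than calibrated ones.

    import cv2
    import numpy as np

    # Placeholder intrinsics; a real application would use calibrated values.
    camera_matrix = np.array([[800.0,   0.0, 320.0],
                              [  0.0, 800.0, 240.0],
                              [  0.0,   0.0,   1.0]])
    dist_coeffs = np.zeros(5)
    marker_length_m = 0.05  # assumed physical marker size (5 cm)

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    def track_marker(frame_bgr):
        """Detect a marker in one camera frame and return its pose, if visible."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
        if ids is None:
            return None  # marker left the view: tracking, and hence augmentation, is lost
        result = cv2.aruco.estimatePoseSingleMarkers(
            corners, marker_length_m, camera_matrix, dist_coeffs)
        rvecs, tvecs = result[0], result[1]
        # The rotation and translation vectors give the marker's pose relative
        # to the camera; virtual content is rendered at this pose each frame.
        return rvecs[0], tvecs[0]

The returned None case reflects the constraint discussed above: as soon as the marker falls outside the camera's view, the real-to-virtual correspondence is lost, which is precisely the limitation that depth-sensing, markerless devices remove.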

