Abstract

Effectively representing an object's position and depth in augmented reality (AR) is crucial not just for realism, but also for AR's wider adoption in real-world applications. Domains such as architecture and building design cannot leverage AR's advantages without effective representation of position. Prior work has examined how the human visual system perceives and interprets such cues in AR, but it has focused on systems that use only a single AR modality, i.e., head-mounted display, tablet/handheld, or projection. Given the respective limitations of each modality regarding shared experience, stereo display, field of view, etc., prior work has overlooked the possible benefits of utilizing multiple AR modalities together. By combining multiple AR systems, we can attempt to address the deficiencies of one modality by leveraging the features of the others. This work examines methods for representing position in a multi-modal AR system consisting of a stereo head-mounted display and a ceiling-mounted projection system. Given that the AR content is now rendered across two separate AR realities, how does the user know which projected object matches the object shown in their head-mounted display? We explore representations that correlate and fuse objects across modalities. In this paper, we review previous work on position and depth in AR before describing multiple representations for head-mounted and projector-based AR that can be paired across modalities. To the authors' knowledge, this work represents the first step towards utilizing multiple AR modalities in which the AR content is designed directly to complement deficiencies in the other modality.

Highlights

  • As augmented reality (AR) becomes more available to end users and industry, the limitations and restrictions of the technology permeate from research questions into real-world problems that impact end users

  • Whilst the mainstream focus has been on handheld AR, and more recently head-mounted-display (HMD) AR, spatial augmented reality (SAR) [1] presents unique attributes compared to both HMD and handheld AR

  • Whilst polygon and shader effects are continually increasing in quality, it is the subtle visual indicators that determine the effectiveness of the AR content


Introduction

As augmented reality (AR) becomes more available to end users and industry, the limitations and restrictions of the technology permeate from research questions into real-world problems that impact end users. CADwalk [3] is a commercial software platform that utilizes SAR to provide a physically immersive room for viewing and editing life-size building blueprints (Figure 1). Using multiple downward-facing, ceiling-mounted projectors, life-size blueprints are visualized and edited on the floor, allowing users to physically walk around a 1:1 scale representation of the building. AR systems need to ensure that all cues interpreted by the human eye are faithfully recreated, especially those used to identify position. These cues can be divided into near personal space and farther distances, with each range using spatial cues differently [5]. Because these indicators are so important to user perception, redundant indicators should be used to guard against the failure of any individual indicator and to preserve correct overall understanding and judgement [4].

