It gives us great pleasure to introduce this special issue of the Springer Virtual Reality Journal, dedicated to research in augmented reality (AR) in light of the growing interest in the field. Over 40 years ago, Ivan Sutherland developed the first AR application, which allowed virtual images to be seamlessly overlaid on the real world. Since then, much of the research in the field has focused on the enabling technologies for AR experiences, such as tracking, display devices, and software systems. In the call for papers, however, we asked for work focused on the augmented reality experience itself, such as new techniques for interacting with AR, methods for evaluating the quality of the experience, better tools for creating AR experiences for non-programmers, and other related topics. We received more than thirty submissions, most of very high quality, and we are delighted to present the final selection of eleven papers. Each of these papers was reviewed by at least four reviewers and revised to address the reviewers' feedback before appearing here.

The papers can be grouped into three categories: Tracking and Projection Methods, Interaction Techniques, and Interface Design and Systems. In the Tracking group, Lieberknecht et al. (2011) present methods for benchmarking template-based computer vision tracking algorithms for AR. Template-based tracking has been the most popular form of vision-based AR tracking for the past 10 years, and their work gives valuable guidance on how to benchmark the performance of such systems. Uchiyama (2011) discusses camera tracking with online learning of keypoints using a promising Locally Likely Arrangement Hashing technique. Finally, Nagase et al. (2011) show how model-based optimal projector selection in a multi-projection environment can be used to compensate for dynamic defocus and occlusion in projected imagery.
The Interaction papers present a range of different ways of interacting with AR content. Lee et al. (2011) explore tangible interaction methods and, in particular, how two-handed tangible techniques can be used for arranging AR blocks. In their paper, they show how very natural block-arranging methods with real blocks can be used for intuitive manipulation of virtual content. Iwai and Sato's paper on the Limpid Desk (2011) presents an interesting AR projection-based interface for viewing virtual imagery projected onto real documents. They also use gesture-based interaction to enable touch sensing and browsing through the document stack. Finally, the Ajanki et al. (2011) paper explores how to develop context-sensitive interaction techniques. Using position and gaze sensing, their system can automatically infer the user's interest in people and topics and display AR content reflecting that interest, thereby supporting implicit rather than explicit interaction.

The largest selection of papers is in the Interface Design and AR Systems group. Livingston's paper (2011) presents general user interface guidelines for military AR applications and, in particular, for presenting outdoor AR content. Kim (2011) describes a related system that allows virtual video to be overlaid on aerial maps. The In-Place AR paper of Hagbi et al. (2011) also shows an interesting map-based

M. Billinghurst, HIT Lab NZ, University of Canterbury, Christchurch, New Zealand. e-mail: mark.billinghurst@hitlabnz.org