The rapid advances in the technology and science of presenting spatial sound in virtual, augmented, and mixed-reality environments seem to be underrepresented in recent literature. The goal of this special issue of the Virtual Reality Journal is twofold: to provide a state-of-the-art review of progress in spatial sound as applied to virtual reality (VR) and augmented reality (AR), and to stimulate further research in this emerging and important field. In this special issue, we are pleased to present papers spanning a range of topics, from basic research on the perception of spatial sound to more applied papers on the use of spatial sound in real-world settings. The guest editors hope that this special issue will also encourage scientists to incorporate spatial sound into their own research and to take the next step beyond the innovative projects described here.

Virtual reality, mixed reality, and augmented reality are often technology-driven. On this point, the recent emergence of affordable head-mounted displays, as exemplified by the HTC Vive (developed with Valve), Samsung Gear VR, Sony PlayStation VR, the Samsung-sponsored eye-tracking FOVE, Google Cardboard, and the Facebook-acquired Oculus Rift, signals a mass diffusion of VR-style applications such as games and visualization. Presumably, the tracking, alignment, and omnidirectional capabilities commonly found in these visual displays will foster similar developments in personalized audio displays.

Considering the range of information that human senses can detect, multimodal systems that present users with coordinated displays are needed to increase realism and performance. In the research and development of virtual and mixed-reality systems, the audio modality has been somewhat underemphasized, partly because the ‘‘original field’’ of virtual reality focused on the (at least for some tasks) dominant visual modality.
For instance, despite its ostensible ambitions toward multimodal interfaces, the Virtual Reality journal is still classified by its publisher as ‘‘Image Processing: Computer Imaging, Graphics, and Vision,’’ whereas Springer’s category ‘‘HCI: User Interfaces, HCI and Ergonomics’’ would probably be a better fit.

The peer-reviewed articles presented in this special issue represent a broad range of themes in spatial sound, including basic research papers, which help provide a baseline of established scientific data in the field, and ‘‘application’’ papers, which are not only interesting in their own right but can also be used to evaluate how well systems with spatial audio perform in realistic scenarios. These papers will be especially useful for those who design systems for real-world, pragmatic applications.

Reflecting the expanding interest in the field, the papers presented in this special issue come from innovative researchers in Asia, Europe, and North America, with a focus on recent advances in spatial ‘‘virtual’’ sound, including spatialized audio interfaces; perception, presence, and cognition; navigation and way-finding; and applications of spatialized sound for VR, AR, mixed reality (MR), and presence. A deeper review of many of these topics can be found in a recently published anthology on augmented reality and wearable computers, edited by one of the special issue guest editors (Barfield 2016) and including contributions about spatial sound and augmented audio reality by the other two (Michael Cohen and Julian Villegas).