Two research projects are described that explore the use of spoken natural language interfaces to virtual reality (VR) systems. Both projects combine off-the-shelf speech recognition and synthesis technology with in-house command interpreters that interface to the VR applications. Details about the interpreters and other technical aspects of the projects are provided, together with a discussion of some of the design decisions involved in the creation of speech interfaces. Questions and issues raised by the projects are presented as inspiration for future work. These issues include: requirements for object and information representation in VR models to support natural language interfaces; use of the visual context to establish the interaction context; difficulties with referencing events in the virtual world; and problems related to the usability of speech and natural language interfaces in general.
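The architecture sketched above (an interpreter mediating between recognized utterances and the VR model, using the visual context to resolve references) can be illustrated with a minimal toy example. All names here (`VRObject`, `Interpreter`, the `hide` command) are hypothetical illustrations, not the interfaces used in the projects described:

```python
from dataclasses import dataclass


@dataclass
class VRObject:
    """A hypothetical object in the VR model, with attributes the
    interpreter can match against words in an utterance."""
    name: str
    color: str
    visible: bool = True  # whether the object is in the current visual context


class Interpreter:
    """Toy command interpreter: parses a recognized utterance and
    dispatches a command to the VR model, replying via speech synthesis."""

    def __init__(self, scene):
        self.scene = scene  # list of VRObject instances

    def resolve(self, words):
        # Use the visual context: only currently visible objects are
        # candidates for a referring expression.
        candidates = [o for o in self.scene
                      if o.visible and (o.name in words or o.color in words)]
        # Succeed only on an unambiguous reference.
        return candidates[0] if len(candidates) == 1 else None

    def interpret(self, utterance):
        tokens = utterance.lower().split()
        if tokens and tokens[0] == "hide":
            target = self.resolve(tokens[1:])
            if target is None:
                return "Which object do you mean?"  # clarification reply
            target.visible = False
            return f"Hiding the {target.name}."
        return "Sorry, I did not understand."


# Usage: resolve "the red cube" against the visible scene.
interp = Interpreter([VRObject("cube", "red"), VRObject("sphere", "blue")])
print(interp.interpret("hide the red cube"))  # → Hiding the cube.
```

Note how the visual context does the disambiguation work: if two red objects were visible, `resolve` would return `None` and the interpreter would ask a clarification question, one of the interaction-context issues the abstract raises.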