Abstract
The advent of virtual reality (VR) introduced a paradigm for human-to-human communication in which 3-D shapes can be manipulated in real time in a new kind of computer-supported cooperative workspace (CSCW) (Takemura and Kishino 1992). However, mere manipulation, whether with 3-D input devices (e.g., the DataGlove™) or with spoken language (Mochizuki and Kishino 1991), does not do justice to this new paradigm, which could prove revolutionary for both human-to-human and human-to-machine communication. This paper discusses the possibility of providing the means for participants in VR-based CSCW not only to manipulate 3-D shapes interactively, but also to generate and modify them using verbal descriptions together with simple hand gestures. To this end, the paper also proposes a framework for interactively indexing knowledge-level descriptions (Newell 1982; Tijerino and Mizoguchi 1993) of human intentions to a symbol-level representation based on deformable superquadrics (Pentland 1986; Horikoshi and Kasahara 1990; Terzopoulos 1991). At the very least, this framework breaks ground in the integration of natural language with interactive computer graphics.