Abstract
For decades, architectural designs have been created and communicated with de facto tools: either 2D or 3D CAD. Architects and CAD operators alike use CAD software installed on their personal computers or workstations. The interaction techniques available in these tools are restricted to the WIMP (windows, icons, mouse, and pointers) paradigm. The Graphical User Interface (GUI) and access to the CAD software's functionality are therefore designed around this paradigm. In the Multi-Hand Gesture (MHG) interaction paradigm, a non-contact gesture recognition system detects, in real time, the movements of the hands and fingers as well as their positions in 3D space. These gestures are then interpreted and used to execute specific commands or tasks. This paper proposes a framework for a non-contact MHG interaction technique for architectural design. It also discusses 1) the WIMP and non-contact MHG interaction techniques, and 2) the development of an early prototype of a non-contact MHG recognition system. The prototype consists of software developed to recognize and interpret multi-hand gestures captured by a Kinect sensor, which performs real-time scanning of three-dimensional space and depth.
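To make the described pipeline concrete, the sketch below illustrates one way a recognition loop could map per-frame two-hand states to CAD commands. It is a minimal illustration under stated assumptions, not the paper's implementation or the actual Kinect SDK API: the `HandFrame` structure, the frame stream, and the specific gesture-to-command mapping are hypothetical stand-ins for the depth and skeleton data the prototype consumes.

```python
# Minimal sketch of a multi-hand gesture interpretation loop.
# All types and the gesture-to-command mapping here are illustrative
# assumptions, not the prototype's or the Kinect SDK's actual API.
from dataclasses import dataclass
from typing import Iterable, Tuple


@dataclass
class HandFrame:
    """Palm position in metres (sensor coordinates) plus an open/closed flag."""
    x: float
    y: float
    z: float
    is_closed: bool


def interpret_gesture(left: HandFrame, right: HandFrame) -> str:
    """Map one frame of two-hand states to an illustrative CAD command name."""
    if left.is_closed and right.is_closed:
        # Two closed fists: the changing distance between hands could drive zoom.
        return "zoom_model"
    if left.is_closed != right.is_closed:
        # Exactly one closed hand: a grab-and-drag pan of the view.
        return "pan_view"
    # Two open hands: rotating the pair of hands could orbit the camera.
    return "orbit_view"


def run(frames: Iterable[Tuple[HandFrame, HandFrame]]) -> None:
    """Consume (left, right) hand frames and print the interpreted command."""
    for left, right in frames:
        print(interpret_gesture(left, right))


if __name__ == "__main__":
    # Stand-in data in place of a live depth-sensor stream.
    demo = [
        (HandFrame(-0.2, 1.1, 1.5, True), HandFrame(0.2, 1.1, 1.5, True)),
        (HandFrame(-0.2, 1.1, 1.5, False), HandFrame(0.2, 1.0, 1.4, True)),
        (HandFrame(-0.3, 1.2, 1.5, False), HandFrame(0.3, 1.2, 1.5, False)),
    ]
    run(demo)
```

In a full system, the printed command name would instead be dispatched to the CAD application, and the per-frame hand positions would parameterize the command (e.g. the inter-hand distance driving the zoom factor).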