Abstract
This paper proposes a novel human-computer interaction system based on gesture recognition. It combines a head-mounted display with a multi-modal sensor setup that includes a depth camera. The depth information is used both to seamlessly integrate augmented reality elements into the real world and as input for a novel gesture-based interface. Reliable gesture recognition is achieved by a real-time algorithm that arranges novel feature descriptors into a multi-dimensional structure fed to an SVM classifier. The system has been tested with various augmented reality applications, including an innovative human-computer interaction scheme in which virtual windows can be arranged within the real world observed by the user.
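To make the recognition pipeline concrete, the following is a minimal sketch (not the authors' implementation) of the final classification stage: depth-derived feature descriptors flattened into fixed-length vectors and fed to an SVM. The feature dimensionality, number of gesture classes, and kernel settings below are assumptions for illustration only.

```python
# Sketch of SVM-based gesture classification on depth-derived descriptors.
# All data shapes and parameters here are hypothetical.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical training data: one row per gesture sample, each row a
# flattened multi-dimensional feature descriptor extracted from the
# depth map; labels are gesture class IDs.
X_train = rng.random((200, 64))
y_train = rng.integers(0, 5, size=200)

# A kernel SVM is a common choice for descriptor vectors of this kind.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)

# At run time, the descriptor computed from each new frame is
# classified the same way to obtain the recognized gesture.
x_new = rng.random((1, 64))
predicted_gesture = clf.predict(x_new)[0]
print(f"Predicted gesture class: {predicted_gesture}")
```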