Abstract
There have been many reports of concepts for interfaces combining language with nonverbal media such as facial expressions and gestures, but few attempts have been made to actually build and test such systems. The authors present a new configuration for a visual- and audio-media-based human–computer interface named “Human Reader” and report the realization and experimental verification of a portion of the system. The visual component of the Human Reader configuration is a subsystem called Human Image Reader, which detects head movement through a Head Reader and hand movement through a Hand Reader, running on multiple workstations. Three video cameras placed at the front, at the side, and overhead record the subject's face and hands, whose movements are extracted and analyzed in real time. Voice is captured by a microphone connected to a voice recognition unit, which recognizes spoken commands. The image generation system CG Secretary presents an image with varied expressions on the workstation screen, with synchronized vocal output, to create a more natural interface between human and computer. Head Reader enables workstation window switching, while Hand Reader lets the user control electronic presentations. Both applications were tested with excellent results.