Abstract
We propose a new approach to sign language animation based on skin region detection in an infrared image. Conventional systems require many manual operations to generate animations that appropriately express personality and/or emotion. A promising way to reduce this workload is to generate an animation automatically from a video of real motion and then refine it manually. In the proposed method, a 3D CG model corresponding to a characteristic posture in sign language is generated automatically by pattern recognition on a thermal image, and a hand model, prepared manually in advance, is attached to it. If necessary, the generated model can be manually replaced by a more appropriate model corresponding to the training key frames, and/or refined by hand. In our experiments, a person experienced in sign language recognized the Japanese sign language of 71 words expressed as animation with 88.3% accuracy, and three persons experienced in sign language recognized sign language animations representing three emotions (neutral, happy, and angry) with 88.9% accuracy.
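The first stage described above, detecting skin regions in a thermal image, is commonly done by thresholding pixel temperatures around the range of human skin. The following is a minimal sketch of that idea; the temperature range (30-36 °C), the function names, and the synthetic image are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of skin region detection on a thermal image by
# temperature thresholding. The 30-36 degC skin range is an assumption.
import numpy as np

def detect_skin_mask(thermal, t_low=30.0, t_high=36.0):
    """Return a boolean mask of pixels whose temperature lies in the skin range."""
    return (thermal >= t_low) & (thermal <= t_high)

def bounding_box(mask):
    """Return (top, left, bottom, right) of the mask's extent, or None if empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

# Synthetic 8x8 "thermal image": 22 degC background with a warm 33 degC blob
# standing in for a hand region.
img = np.full((8, 8), 22.0)
img[2:5, 3:6] = 33.0

mask = detect_skin_mask(img)
print(int(mask.sum()))      # 9 skin pixels
print(bounding_box(mask))   # (2, 3, 4, 5)
```

A real system would follow this with connected-component labeling to separate the face from each hand before fitting the 3D CG model to the detected posture.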