Abstract

We propose a new approach to sign language animation based on skin region detection in infrared images. Conventional systems require many manual operations to generate animations that appropriately express personality and/or emotion. A promising way to reduce this workload is to manually refine an animation generated automatically from video of real motion. In the proposed method, a 3D CG model corresponding to a characteristic posture in sign language is constructed automatically by pattern recognition on a thermal image, and a hand model, prepared manually beforehand, is then attached to it. If necessary, the model can be manually replaced by one that better matches the training key frames, and/or refined by hand. In our experiments, a person experienced in sign language recognized animations of 71 Japanese Sign Language words with 88.3% accuracy, and three experienced signers recognized animations expressing three emotions (neutral, happy, and angry) with 88.9% accuracy.
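The core of the skin region detection step can be illustrated by simple temperature thresholding: skin is warmer than the background, so it stands out in a thermal image. The sketch below is a minimal illustration of this idea, not the paper's actual pipeline; the temperature thresholds and the synthetic frame are assumptions for demonstration.

```python
import numpy as np

def detect_skin_regions(thermal, low=30.0, high=37.5):
    """Return a boolean mask of pixels whose apparent temperature (deg C)
    falls within a typical skin range. Thresholds are illustrative
    assumptions, not values from the paper."""
    return (thermal >= low) & (thermal <= high)

# Synthetic 4x4 "thermal image": background ~22 C, a warm hand patch ~34 C.
frame = np.full((4, 4), 22.0)
frame[1:3, 1:3] = 34.0

mask = detect_skin_regions(frame)
print(int(mask.sum()))  # number of skin pixels detected
```

In a real system the resulting mask would be cleaned up (e.g. by connected-component analysis) before a posture is matched against the key-frame models.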
