Abstract

A head-mounted display (HMD) provides an immersive experience in virtual environments for purposes such as gaming and communication. However, it is difficult to capture facial expressions in an HMD-based virtual environment because the upper half of the user's face is covered by the HMD. In this paper, we propose a facial expression mapping technology between a user and a virtual avatar using embedded optical sensors and machine learning. Optical sensors attached inside the HMD measure the distance between each sensor and the surface of the face. Our system learns the sensor values of each facial expression with a neural network and creates a classifier that estimates the current facial expression.
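
The abstract does not give implementation details, but the pipeline it describes (per-sensor distance readings fed to a neural-network classifier over a fixed set of expressions) might look roughly like the following PyTorch sketch. The sensor count, expression set, network shape, and training data below are illustrative assumptions, not values from the paper.

    import torch
    import torch.nn as nn

    # Illustrative dimensions (not specified in the abstract):
    NUM_SENSORS = 16      # optical distance sensors mounted inside the HMD
    NUM_EXPRESSIONS = 5   # e.g. neutral, smile, frown, surprise, anger

    class ExpressionClassifier(nn.Module):
        """Small feed-forward network mapping sensor distances to expression logits."""
        def __init__(self, num_sensors: int, num_expressions: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(num_sensors, 64),
                nn.ReLU(),
                nn.Linear(64, num_expressions),
            )

        def forward(self, distances: torch.Tensor) -> torch.Tensor:
            return self.net(distances)

    model = ExpressionClassifier(NUM_SENSORS, NUM_EXPRESSIONS)

    # A random batch stands in for the calibration data the system would
    # collect while the user performs each expression.
    readings = torch.rand(32, NUM_SENSORS)              # normalized distances
    labels = torch.randint(0, NUM_EXPRESSIONS, (32,))   # expression labels

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One supervised training step on the (readings, labels) pairs.
    logits = model(readings)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # At run time, the most likely class would drive the avatar's expression.
    predicted = model(readings).argmax(dim=1)

In a real system the training loop would run over many captured frames per expression, and the predicted class (or the class probabilities) would be mapped onto the avatar's face each frame.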
