Abstract

Due to the increasing demand for virtual avatars, there has been recent growth in the research and development of frameworks for realistic digital humans, which in turn creates demand for realistic and adaptable facial motion capture systems. Most existing frameworks belong to private companies or require significant investment, which is why democratized solutions are relevant for the growth of digital human content creation. This work proposes a facial motion capture framework for digital humans that uses machine learning for facial codification intensity regression. The main focus is to use coded face movement intensities to generate realistic expressions on a digital human. Ablation studies on the regression models show that neural networks using Histogram of Oriented Gradients (HOG) features with person-specific normalization achieve better overall performance than other methods in the literature. With an RMSE of 0.052, the proposed framework offers reliable results that can be rendered on the face of a MetaHuman.
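For illustration only, the sketch below shows one way such a pipeline could be assembled: HOG descriptors extracted from aligned face crops, a per-subject normalization step, and a small neural-network regressor for the coded movement intensities. The crop size, descriptor parameters, network architecture, and the choice of subtracting each subject's median descriptor are assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): HOG features + person-specific
# normalization + neural-network regression of facial codification intensities.
# Assumed inputs: `face_crops` (aligned grayscale face images, e.g. 112x112),
# `subject_ids` (one id per image), and intensity targets `y`.
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPRegressor

def hog_features(face_crops):
    """Extract a HOG descriptor per aligned face crop."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in face_crops
    ])

def person_specific_normalize(feats, subject_ids):
    """Subtract each subject's median descriptor (one common way to reduce
    identity-related appearance bias; an assumption in this sketch)."""
    feats = feats.copy()
    for sid in np.unique(subject_ids):
        mask = subject_ids == sid
        feats[mask] -= np.median(feats[mask], axis=0)
    return feats

# Example usage (hypothetical data split):
# X = person_specific_normalize(hog_features(face_crops), subject_ids)
# model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500)
# model.fit(X_train, y_train)
# rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
```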
