Abstract
Background and objective
Facial palsy negatively affects both the professional and personal quality of life of affected patients. Classical facial rehabilitation strategies can restore facial mimics toward normal, symmetrical movements and appearances. However, objective, quantitative, in-vivo biofeedback on facial texture and muscle activation is lacking for personalizing rehabilitation programs and monitoring recovery progress. Consequently, this study proposed a novel patient-specific modeling method that generates a full patient-specific head model from a visual sensor and then computes facial texture and muscle activation in real time to support clinical decision making.

Methods
The modeling workflow comprises (1) Kinect-to-head, (2) head-to-skull, and (3) muscle network definition and generation processes. In the Kinect-to-head process, subject-specific data acquired from a new user in a neutral mimic are used to generate his/her geometrical head model with facial texture: a template head model is deformed to optimally fit the high-definition facial points acquired by the Kinect sensor, and the facial texture is merged from facial images taken from left, right, and center viewpoints. In the head-to-skull process, a generic skull model is deformed so that its shape statistically fits the subject's geometrical head model. In the muscle network definition and generation process, a muscle network is defined from the head and skull models for computing muscle strains during facial movements; muscle insertion points and muscle attachment points are defined as vertex positions on the head model and the skull model, respectively, based on standard facial anatomy. Three healthy subjects and two facial palsy patients were selected to validate the proposed method. In neutral positions, magnetic resonance imaging (MRI)-based head and skull models were compared with the Kinect-based head and skull models. In mimic positions, infrared depth-based head models in smiling and [u]-pronouncing mimics were compared with the corresponding animated Kinect-driven head models. The Hausdorff distance metric was used for these comparisons. Moreover, the computed muscle lengths and strains in the tested facial mimics were validated against values reported in the literature. (Minimal sketches of the main computational steps are given after the abstract.)

Results
With the current hardware configuration, the patient-specific head model with skull and muscle network could be generated within 17.16 ± 0.37 s and animated in real time at a framerate of 40 fps. In neutral positions, the best mean error was 1.91 mm for the head models and 3.21 mm for the skull models. On facial regions, the best mean errors were 1.53 mm and 2.82 mm for the head and skull models, respectively. On muscle insertion/attachment point regions, the best mean errors were 1.09 mm and 2.16 mm for the head and skull models, respectively. In mimic positions, the head-model errors on facial regions were 2.02 mm for the smiling mimic and 2.00 mm for the [u]-pronouncing mimic. All error values above were computed in a one-time validation procedure. Facial muscles exhibited shortening during smiling and elongation during pronunciation of the sound [u]. The extracted muscle features (i.e., muscle length and strain) agree with experimental and literature data.

Conclusions
This study proposed a novel modeling method for rapidly generating and animating a patient-specific biomechanical head model with facial texture and muscle activation biofeedback.
The Kinect-driven muscle strains could be applied to real-time, muscle-oriented facial paralysis grading and other facial analysis applications (one illustrative grading score is sketched below).
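The abstract does not detail how the template head model is deformed onto the Kinect's high-definition facial points. A common first step in such template-fitting pipelines is a rigid landmark alignment via the Kabsch algorithm; the sketch below assumes corresponding landmark pairs between the template and the Kinect data are already available (the variable names are hypothetical), with a non-rigid refinement to follow. This is a minimal sketch of that pre-alignment, not the authors' exact method.

```python
import numpy as np

def rigid_align(template_pts, kinect_pts):
    """Kabsch algorithm: find the rotation R and translation t that
    best map template landmarks onto corresponding Kinect landmarks.
    Both arrays are shaped (N, 3) with rows in correspondence."""
    ct = template_pts.mean(axis=0)
    ck = kinect_pts.mean(axis=0)
    H = (template_pts - ct).T @ (kinect_pts - ck)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ck - R @ ct
    return R, t

# Usage (hypothetical arrays): transform every template vertex
# before any non-rigid fitting step.
# R, t = rigid_align(template_landmarks, kinect_landmarks)
# aligned_vertices = template_vertices @ R.T + t
```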
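The head-to-skull step is described only as a statistical fit of a generic skull shape to the subject's head geometry. One plausible formulation, stated here as an assumption rather than the authors' method, is a PCA shape model whose mode coefficients are solved by regularised least squares:

```python
import numpy as np

def fit_skull_shape(mean_shape, components, target, reg=1e-2):
    """Ridge-regularised least-squares fit of PCA shape coefficients.
    mean_shape: (3V,) flattened mean skull vertices
    components: (3V, K) principal shape modes (columns)
    target:     (3V,) skull-surface predictor derived from the head model
    reg:        Tikhonov term keeping the fit near the mean shape
    All inputs are hypothetical stand-ins for a trained shape model."""
    A = components
    b = target - mean_shape              # deviation the modes must explain
    K = A.shape[1]
    # Solve (A^T A + reg*I) w = A^T b for the mode weights w.
    w = np.linalg.solve(A.T @ A + reg * np.eye(K), A.T @ b)
    return mean_shape + A @ w            # fitted skull vertices, flattened
```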
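The muscle network reduces each muscle to an insertion vertex on the animated head mesh and an attachment vertex on the skull. Assuming each muscle is approximated by the straight segment between these two points, which is one plausible reading of the abstract, per-frame length and engineering strain follow directly; negative strain then corresponds to the shortening seen in smiling and positive strain to the elongation seen when pronouncing [u].

```python
import numpy as np

def muscle_strain(head_vertices, skull_vertices,
                  insertion_idx, attachment_idx, rest_length):
    """Straight-line muscle length and engineering strain (assumed model).
    head_vertices:  (V, 3) animated head mesh vertices for this frame
    skull_vertices: (W, 3) fixed skull vertices
    rest_length:    muscle length measured in the neutral mimic."""
    p = head_vertices[insertion_idx]     # insertion moves with the mimic
    q = skull_vertices[attachment_idx]   # bony attachment stays fixed
    length = float(np.linalg.norm(p - q))
    strain = (length - rest_length) / rest_length
    return length, strain
```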
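Validation compares vertex sets with the Hausdorff distance; the abstract additionally reports mean errors, which suggests an averaged nearest-neighbour surface distance was used as well. A minimal sketch of both metrics with SciPy (the pairing of point sets, e.g. MRI-based versus Kinect-based models, is up to the caller):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, 3) vertex sets,
    in the units of the input coordinates (here, mm)."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

def mean_surface_error(pts_a, pts_b):
    """Mean nearest-neighbour distance from pts_a to pts_b: a likely
    basis for the reported mean errors, though the abstract does not
    state the exact averaging scheme."""
    d, _ = cKDTree(pts_b).query(pts_a)
    return float(d.mean())
```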
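Finally, the abstract suggests that Kinect-driven muscle strains could feed real-time facial paralysis grading, without specifying a scheme. One natural, purely illustrative approach scores the left/right asymmetry of paired muscle strains, since palsy typically manifests as one-sided underactivation:

```python
import numpy as np

def strain_asymmetry(left_strains, right_strains, eps=1e-6):
    """Illustrative asymmetry index over paired left/right muscles:
    0 for perfectly symmetric activation, approaching 1 as one side
    stops moving. Not the paper's grading scheme, only a sketch."""
    l = np.asarray(left_strains, dtype=float)
    r = np.asarray(right_strains, dtype=float)
    return float(np.mean(np.abs(l - r) / (np.abs(l) + np.abs(r) + eps)))
```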