Abstract

This paper presents a virtual try-on system to correctly visualize 3D objects (e.g., glasses) on the face of a given user. By capturing image and depth information of the user with a low-cost RGB-D camera, we apply a face tracking technique to detect specific landmarks in the facial image. These landmarks and the point cloud reconstructed from the depth information are combined to optimize a 3D facial morphable model so that it fits the user's head and face as well as possible. We then deform the chosen 3D object from its rest shape to a deformed shape matching the specific facial shape of the user. The last step projects and renders the 3D object into the original image, with enhanced precision and at the proper scale, showing the selected object on the user's face. We validate the performance of our system on eight subjects (four male and four female) and report results numerically and visually. Our results demonstrate that fitting a facial model to the user's face makes the rendered virtual 3D objects look more realistic.
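
The sketch below illustrates the kind of fitting step the abstract describes: a linear 3D morphable model (mean shape plus shape basis) is adjusted so that its landmark vertices match 3D landmark positions recovered from the RGB-D data, via regularized least squares. This is a minimal illustration under assumed conventions, not the authors' implementation; the function name fit_morphable_model, the array shapes, the landmark indexing, and the regularization weight are all illustrative assumptions.

    # Minimal sketch (assumptions noted above): fit a linear morphable model
    # to observed 3D landmarks with Tikhonov-regularized least squares.
    import numpy as np

    def fit_morphable_model(mean_shape, basis, landmark_idx, landmark_targets, reg=1e-3):
        """Estimate shape coefficients so the model's landmark vertices match
        the observed 3D landmarks.

        mean_shape       : (N, 3) mean face vertices of the morphable model
        basis            : (N, 3, K) linear shape basis with K components
        landmark_idx     : (M,) indices of model vertices used as landmarks
        landmark_targets : (M, 3) landmark positions measured from the depth data
        reg              : regularization weight on the shape coefficients
        """
        # Restrict the model to the landmark vertices and flatten to vectors.
        A = basis[landmark_idx].reshape(-1, basis.shape[-1])           # (3M, K)
        b = (landmark_targets - mean_shape[landmark_idx]).reshape(-1)  # (3M,)

        # Regularized normal equations: (A^T A + reg * I) alpha = A^T b
        K = A.shape[1]
        alpha = np.linalg.solve(A.T @ A + reg * np.eye(K), A.T @ b)

        # Reconstruct the full fitted head/face mesh from the coefficients.
        fitted = mean_shape + basis @ alpha
        return alpha, fitted

    # Toy usage with random data standing in for a real model and detections.
    rng = np.random.default_rng(0)
    mean = rng.normal(size=(5000, 3))
    B = rng.normal(size=(5000, 3, 40))
    idx = rng.choice(5000, size=68, replace=False)
    targets = mean[idx] + 0.01 * rng.normal(size=(68, 3))
    alpha, fitted = fit_morphable_model(mean, B, idx, targets)

In the system described by the abstract, the resulting fitted mesh would then drive the deformation of the chosen 3D object (e.g., glasses) and its projection back into the original image.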
