Abstract
Preparing datasets for training real-time face tracking algorithms for HMDs is costly. Manually annotated facial landmarks are available for regular photography datasets, but the introspectively mounted cameras used for VR face tracking have requirements incompatible with these existing datasets. Such requirements include operating ergonomically at close range with wide-angle lenses, low-latency short exposures, and near-infrared sensors. To train a suitable face solver without the cost of producing new training data, we automatically repurpose an existing landmark dataset to these specialist HMD camera intrinsics with a radial warp reprojection. Our method separates training into local regions of the source photos, i.e., mouth and eyes, for more accurate local correspondence to the camera locations mounted underneath and inside the fully functioning HMD. We combine per-camera solved landmarks to yield a live animated avatar driven by the user's facial expressions. Critical robustness is achieved with measures for mouth region segmentation, blink detection, and pupil tracking. We quantify results against the unprocessed training dataset and provide empirical comparisons with commercial face trackers.
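The radial warp reprojection described above can be sketched as a polynomial radial distortion applied to the 2D landmark coordinates of the source dataset. The coefficients and the distortion model below are illustrative assumptions, not the values or exact model from the paper:

```python
import numpy as np

def radial_warp_points(pts, center, k1, k2=0.0):
    """Reproject 2D landmark coordinates with a polynomial radial
    distortion model: r' = r * (1 + k1*r^2 + k2*r^4).

    pts    : (N, 2) array of landmark positions in pixels.
    center : distortion centre (cx, cy), e.g. the principal point.
    k1, k2 : illustrative distortion coefficients (assumed, not from
             the paper); positive k1 pushes points outward, matching
             the barrel distortion of a wide-angle HMD lens.
    """
    d = np.asarray(pts, dtype=float) - center   # offsets from centre
    r2 = np.sum(d * d, axis=1, keepdims=True)   # squared radius per point
    scale = 1.0 + k1 * r2 + k2 * r2 * r2        # radial scaling factor
    return center + d * scale
```

Warping the annotated landmarks (and the images, via the same mapping) lets an off-the-shelf dataset mimic the target camera's lens geometry before training.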
Highlights
Head mounted displays (HMDs) are used broadly in many applications [IDC 2017], such as animation [Olszewski et al. 2016], content creation [Vogel et al. 2018], medical applications [Egger et al. 2017], serious games [Gamito et al. 2017], object interaction [Figueiredo et al. 2018], and education [Dinis et al. 2017].
To achieve facial feature tracking with ergonomically mounted introspective cameras within a fully functional VR HMD, the pipeline comprises: 1) source dataset warping to target camera intrinsics, 2) training of the shape detector using the new dataset for sub-regions, 3) additional mouth and eye detection refinements.
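Step 2, training per-region shape detectors, requires splitting each annotated face into the sub-regions seen by the individual HMD cameras. A minimal sketch follows, assuming the common 68-point iBUG annotation scheme; the paper's exact index grouping and padding are assumptions:

```python
import numpy as np

# Assumed sub-region landmark index ranges for the widely used 68-point
# annotation scheme (iBUG); the paper's actual grouping may differ.
REGIONS = {
    "left_eye": range(36, 42),
    "right_eye": range(42, 48),
    "mouth": range(48, 68),
}

def region_crops(landmarks, margin=10):
    """Return a padded bounding box (x0, y0, x1, y1) per facial
    sub-region, suitable for cropping training images so that each
    local solver sees only what its HMD camera would see."""
    lm = np.asarray(landmarks, dtype=float)
    boxes = {}
    for name, idx in REGIONS.items():
        pts = lm[list(idx)]
        x0, y0 = pts.min(axis=0) - margin  # pad the tight box by `margin` px
        x1, y1 = pts.max(axis=0) + margin
        boxes[name] = (x0, y0, x1, y1)
    return boxes
```

Each crop then becomes a separate training example for the detector tied to the corresponding mounted camera.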
This paper presents a radial-warp-based image retargeting that matches casually photographed, labeled images to the lens distortion of cameras integrated into a low-cost HMD.
Summary
Head mounted displays (HMDs) are used broadly in many applications [IDC 2017], such as animation [Olszewski et al. 2016], content creation [Vogel et al. 2018], medical applications [Egger et al. 2017], serious games [Gamito et al. 2017], object interaction [Figueiredo et al. 2018], and education [Dinis et al. 2017]. Prior face-tracking systems mount sensors on the HMD to capture motions from parts of the user's face. These works typically use machine learning approaches to estimate the face pose from sensor data, and require many captures of different users wearing the HMD to train the algorithm, demanding great manual effort to acquire each dataset. With each hand-labeled dataset one can train a method to predict the landmark locations from a camera image of a face in real time, such as Dlib's real-time face predictor [King 2009; Kazemi and Sullivan 2014]. Our novel training dataset preparation is validated upon this facial landmark regression method for use in an HMD. It is extendable to other face tracking algorithms such as Olszewski et al. [2016], since we apply warping to source face images prior to training, and apply further refinements in post-processing, without altering the function of the core tracking solver.
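Because the warp is applied before the solver rather than inside it, the approach composes with any drop-in landmark regressor. A minimal sketch of that composition, with hypothetical function names (`warp` retargets an image to the HMD camera intrinsics; `core_solver` stands in for an unmodified regressor such as Dlib's shape predictor):

```python
def make_hmd_solver(core_solver, warp):
    """Compose a lens-matching warp with an unmodified landmark solver.

    `warp` and `core_solver` are hypothetical placeholders: the warp
    retargets an input image to the HMD camera's distortion, and the
    core solver is any off-the-shelf landmark regressor, left untouched
    as described in the text.
    """
    def solve(image):
        return core_solver(warp(image))  # warp first, then solve as usual
    return solve
```

Swapping `core_solver` for a different tracker requires no change to the warp or refinement stages, which is what makes the dataset preparation reusable across algorithms.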