Abstract
Eye gaze and expressions are crucial non-verbal signals in face-to-face communication. Visual effects and telepresence demand significant improvements in personalized tracking, animation, and synthesis of the eye region to achieve true immersion. Morphable face models, combined with coordinate-based neural volumetric representations, show promise in solving the difficult problem of reconstructing intricate geometry (eyelashes) and synthesizing photorealistic appearance variations (wrinkles and specularities) of eye performances. We propose a novel hybrid representation, ShellNeRF, that builds a discretized volume around a 3DMM face mesh using concentric surfaces to model the deformable 'periocular' region. We define a canonical space based on the UV layout of the shells, which constrains the dense correspondence search. Combined with an explicit eyeball mesh for modeling corneal light transport, our model allows for animatable photorealistic 3D synthesis of the whole eye region. Using multi-view video input, we demonstrate significant improvements over the state-of-the-art in expression re-enactment and transfer for high-resolution close-up views of the eye region.
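To illustrate the shell-based representation described above, the following is a minimal sketch (not the paper's implementation) of how concentric shells might be built by offsetting a base mesh along vertex normals, and how a sample could be mapped into a canonical (u, v, layer) coordinate derived from the shells' shared UV layout. All function and parameter names (build_concentric_shells, num_shells, max_offset, etc.) are hypothetical, and the toy quad stands in for the 3DMM periocular patch purely for demonstration.

import numpy as np

def build_concentric_shells(vertices, normals, num_shells=8, max_offset=0.02):
    """Offset the base mesh vertices along their unit normals to form a
    discretized volume of concentric shells around the face surface.

    vertices: (V, 3) base mesh vertex positions
    normals:  (V, 3) unit vertex normals
    Returns an array of shape (num_shells, V, 3), one vertex set per shell,
    spanning offsets from -max_offset (inside) to +max_offset (outside).
    """
    offsets = np.linspace(-max_offset, max_offset, num_shells)
    return np.stack([vertices + d * normals for d in offsets])

def to_canonical(uv, shell_index, num_shells):
    """Map a sample to a canonical (u, v, w) coordinate: the shared UV layout
    of the shells plus a normalized layer coordinate w in [0, 1]."""
    w = shell_index / (num_shells - 1)
    return np.concatenate([uv, [w]])

# Toy example: a flat quad standing in for the periocular mesh patch.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
norms = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))

shells = build_concentric_shells(verts, norms, num_shells=8)
print(shells.shape)                                # (8, 4, 3)
print(to_canonical(np.array([0.25, 0.5]), 3, 8))   # [0.25 0.5  0.4286]

In this sketch the canonical space simply concatenates the surface UV with a normalized shell index, so that points sampled between shells can be indexed consistently regardless of how the underlying 3DMM mesh deforms; the paper's actual parameterization and correspondence search may differ.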