Abstract

We introduce a method for egocentric videoconferencing that enables hands-free video calls, for instance by people wearing smart glasses or other mixed-reality devices. Videoconferencing portrays valuable non-verbal communication and facial expression cues, but usually requires a front-facing camera. Using a frontal camera in a hands-free setting while a person is on the move is impractical, and even holding a mobile phone camera in front of the face for a long duration while sitting is inconvenient. To overcome these issues, we propose a low-cost wearable egocentric camera setup that can be integrated into smart glasses. Our goal is to mimic a classical video call, and therefore we transform the egocentric perspective of this camera into a front-facing video. To this end, we employ a conditional generative adversarial neural network that learns a mapping from the highly distorted egocentric views to the frontal views common in videoconferencing. Our approach learns to transfer expression details directly from the egocentric view without using a complex intermediate parametric expression model, as is done by related face reenactment methods. We successfully handle subtle expressions that are not easily captured by parametric blendshape-based solutions, e.g., tongue movement, eye movement, eye blinking, strong expressions, and depth-varying movements. To gain control over the rigid head movement in the target view, we condition the generator on synthetic renderings of a moving neutral face. This allows us to synthesize results at different head poses. Our technique produces temporally smooth, video-realistic renderings in real time using a video-to-video translation network in conjunction with a temporal discriminator. We demonstrate the improved capabilities of our technique by comparing against related state-of-the-art approaches.
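To make the conditioning scheme described above concrete, the following is a minimal, illustrative PyTorch sketch, not the authors' implementation: a generator receives a short window of egocentric frames concatenated channel-wise with synthetic neutral-face renderings (the head-pose conditioning) and outputs a frontal frame, while a temporal discriminator scores short frame sequences to encourage temporal smoothness. All class names, layer counts, frame-window sizes, and resolutions are placeholder assumptions; the paper's actual architecture and losses differ.

import torch
import torch.nn as nn

class FrontalizationGenerator(nn.Module):
    """Maps stacked egocentric frames + neutral-face renderings to a frontal frame (sketch)."""
    def __init__(self, ego_frames=3, cond_frames=3, feat=64):
        super().__init__()
        in_ch = 3 * (ego_frames + cond_frames)  # RGB channels per conditioning frame
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, ego, neutral_render):
        # ego, neutral_render: (B, T, 3, H, W) short temporal windows
        x = torch.cat([ego.flatten(1, 2), neutral_render.flatten(1, 2)], dim=1)
        return self.decoder(self.encoder(x))

class TemporalDiscriminator(nn.Module):
    """Scores short clips of frontal frames to penalize temporal flicker (sketch)."""
    def __init__(self, seq_len=3, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, feat, kernel_size=(seq_len, 4, 4), stride=(1, 2, 2), padding=(0, 1, 1)),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(feat, 1, kernel_size=(1, 4, 4), stride=(1, 2, 2), padding=(0, 1, 1)),
        )

    def forward(self, frames):
        # frames: (B, 3, T, H, W) clip of consecutive frontal frames
        return self.net(frames)

# Toy usage with random tensors (shapes are assumptions):
G = FrontalizationGenerator()
D = TemporalDiscriminator()
ego = torch.randn(2, 3, 3, 128, 128)      # batch of egocentric frame windows
neutral = torch.randn(2, 3, 3, 128, 128)  # matching neutral-face renderings
frontal = G(ego, neutral)                 # -> (2, 3, 128, 128) synthesized frontal frame
score = D(frontal.unsqueeze(2).repeat(1, 1, 3, 1, 1))  # score a (toy) 3-frame clip

In such a setup, changing the neutral-face renderings while keeping the egocentric input fixed is what would let the system re-pose the head in the output, which matches the role of the pose conditioning described in the abstract.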

Highlights

  • Videoconferencing is popular because it conveys a wide range of communication signals beyond traditional phone calls, coming closer to face-to-face conversations that use visual cues such as facial expressions or eye gaze

  • We demonstrate that adapting purely audio-driven face reenactment methods [Suwajanakorn et al. 2017; Thies et al. 2020] to our frontalisation task does not suffice, since subtle facial expression cues are not uniquely correlated with speech yet clearly appear in the egocentric video

  • We show that purely audio-driven solutions do not suffice in our egocentric videoconferencing setting, since important non-verbal expressions appear only in the video

Introduction

Videoconferencing is popular because it conveys a wide range of communication signals beyond traditional phone calls, coming closer to face-to-face conversations that use visual cues such as facial expressions or eye gaze. While a front-facing camera is feasible in controlled and static indoor settings, e.g., when working at your desk, such camera placement is not practical in many other everyday scenarios where people call each other with mobile devices, especially when walking outdoors in dynamic environments. In such outdoor settings, or even when walking around or just sitting at home, holding up a camera or mobile phone in front of your face for a long duration to transmit a frontal video of yourself is not viable.
