Abstract

The realistic fusion of human bodies with virtual lighting environments has important applications in digital tourism, virtual conferences, and other fields. Achieving this goal requires not only reconstructing the geometry of the human body but also estimating its reflectance so that the body can be relit in a virtual environment. Existing methods focus either on reconstructing human body geometry or on relighting the body under specific poses, and cannot accommodate new poses in fused relighting images. In this paper, we propose a novel approach that enables the relighting of new poses from multiview human videos captured under unknown illumination: a pose-mapping process establishes a correspondence between the observation space and a canonical space, connecting geometry and reflectance across different temporal and spatial locations. The geometry and reflectance obtained from the neural fields are fed, together with an optimized environment map, into a physically based renderer. Relit humans in new poses can then be generated by supplying skeletal poses, and the entire pipeline can be trained in a self-supervised manner. We verified the proposed technique on multiple datasets. The experimental results show that it fulfills the goal of fusing real humans with virtual lighting environments under new human poses, outperforming recent methods both subjectively and quantitatively.
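The pipeline described above can be sketched in a minimal, illustrative form: observation-space surface points are warped to a canonical space according to the skeletal pose, a neural field (stubbed here with a toy function) predicts per-point albedo and normals there, and a physically based Lambertian renderer shades the points under a sampled environment map. All function names, the rigid warp, and the toy field below are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def warp_to_canonical(points, pose_rotation):
    """Map observation-space points into canonical space (rigid warp stub)."""
    return points @ pose_rotation.T

def neural_field(canonical_points):
    """Stub for the canonical neural field: returns albedo and unit normals.

    A real implementation would be a learned MLP; this toy version just
    produces plausible-shaped outputs for the sketch.
    """
    albedo = 0.5 + 0.5 * np.tanh(canonical_points)  # fake reflectance in (0, 1)
    normals = canonical_points / np.linalg.norm(
        canonical_points, axis=-1, keepdims=True
    )
    return albedo, normals

def relight(points, pose_rotation, light_dirs, env_radiance):
    """Lambertian shading under an environment map sampled at light_dirs."""
    albedo, normals = neural_field(warp_to_canonical(points, pose_rotation))
    cosines = np.clip(normals @ light_dirs.T, 0.0, None)  # (P, L) n·l terms
    irradiance = cosines @ env_radiance                   # (P, 3) Monte Carlo sum
    return albedo * irradiance / len(light_dirs)

# Toy usage: 4 surface points, identity pose, 8 environment-map samples.
rng = np.random.default_rng(0)
pts = rng.normal(size=(4, 3))
dirs = rng.normal(size=(8, 3))
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
env = rng.uniform(0.0, 1.0, size=(8, 3))
rgb = relight(pts, np.eye(3), dirs, env)
print(rgb.shape)  # (4, 3)
```

In the full method, a new skeletal pose changes only the warp, while the canonical field's geometry and reflectance are reused, which is what allows relighting under poses never seen during training.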
