Abstract
Dynamic 3D reconstruction has gained popularity in recent years, with most approaches relying on data-driven learning and optimization methods. The task remains challenging because it requires tracking features in both space and time, and for deformable objects such robust tracking is not always possible. A common way to better ground the problem is to regularize the shape representation. Mesh-based linear blend skinning models have long been the standard for fitting human templates to observed time-series data of human deformation, but this approach suffers from optimization difficulties arising from the need to maintain a consistent mesh topology. This paper proposes a novel algorithm for reconstructing dynamic human shapes from sparse silhouette information alone. Shape models are first built on signed distance neural fields and then optimized via differentiable volumetric rendering to best match the observed data. Experiments testing the robustness of the method show that it outperforms the prior state of the art on dynamic human shape reconstruction by 45%.
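The core pipeline the abstract describes, rendering a silhouette from a signed distance field through differentiable volume rendering, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it substitutes an analytic sphere SDF for the learned neural field, uses a simple logistic SDF-to-density mapping (in the spirit of VolSDF/NeuS-style renderers), and composites opacity along each camera ray to produce a silhouette value that could be compared against an observed mask.

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    """Signed distance to a sphere at the origin; stands in for a learned neural SDF."""
    return np.linalg.norm(points, axis=-1) - radius

def sdf_to_density(sdf, beta=0.02):
    """Map signed distance to volume density with a logistic transition.

    Density is high inside the surface (sdf < 0) and falls off smoothly
    outside, which keeps the rendering differentiable in the SDF values.
    """
    x = np.clip(sdf / beta, -60.0, 60.0)  # avoid overflow in exp
    return (1.0 / beta) / (1.0 + np.exp(x))

def render_silhouette(origin, direction, t_near=0.0, t_far=4.0, n_steps=256):
    """Alpha-composite density along one ray; the result is silhouette opacity."""
    ts = np.linspace(t_near, t_far, n_steps)
    dt = ts[1] - ts[0]
    points = origin + ts[:, None] * direction
    sigma = sdf_to_density(sphere_sdf(points))
    alpha = 1.0 - np.exp(-sigma * dt)
    # Transmittance: probability the ray reaches each sample unoccluded.
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return np.sum(transmittance * alpha)

# A ray through the sphere yields an opaque silhouette pixel; a ray past it, an empty one.
hit = render_silhouette(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
miss = render_silhouette(np.array([0.0, 2.0, -2.0]), np.array([0.0, 0.0, 1.0]))
```

In an optimization loop, the rendered opacities would be compared to the observed binary silhouettes with a photometric or mask loss, and the gradient would flow back through `sdf_to_density` into the SDF parameters; that loop is omitted here for brevity.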
More From: Turkish Journal of Electrical Engineering and Computer Sciences