Abstract
We propose a video editing system that allows a user to apply a time-coherent texture to a surface depicted in raw video from a single uncalibrated camera, supporting both texture mapping of a texture image onto the surface and texture synthesis on the surface from a texture swatch. Our system avoids constructing a 3D shape model and instead uses the recovered normal field to deform the texture so that it plausibly adheres to the undulations of the depicted surface. The texture mapping method uses nonlinear least-squares optimization of a spring model to control how the texture image deforms to match the evolving normal field throughout the video. The texture synthesis method uses a coarse optical flow to advect clusters of pixels corresponding to patches of similarly oriented surface points. These clusters are organized into a minimum advection tree to account for their dynamic visibility. We take a rather crude approach to normal recovery and optical flow estimation, yet the results are robust and plausible for nearly diffuse surfaces such as faces and T-shirts.
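To make the spring-model idea concrete, the following Python sketch (an illustrative assumption, not the paper's implementation) deforms a 2D grid of texture vertices so that each spring's length matches its rest length foreshortened by a recovered normal field, solving the resulting nonlinear least-squares problem with scipy.optimize.least_squares. The grid resolution, the |n_z| foreshortening rule, and the random normal field are all hypothetical placeholders.

```python
# Minimal sketch: spring-model texture deformation driven by a normal field.
# Not the authors' method; parameters and the foreshortening rule are assumptions.
import numpy as np
from scipy.optimize import least_squares

H, W = 8, 8        # assumed texture grid resolution
rest = 1.0         # assumed rest length of each spring

# Hypothetical recovered normal field: one unit normal per grid vertex,
# biased toward the camera (positive z) to mimic a mostly frontal surface.
rng = np.random.default_rng(0)
normals = rng.normal(size=(H, W, 3))
normals[..., 2] = np.abs(normals[..., 2]) + 2.0
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

def vid(i, j):
    # Flatten (row, col) grid coordinates to a vertex index.
    return i * W + j

# Springs along grid rows and columns; each target length is the rest
# length scaled by the mean foreshortening |n_z| of the endpoint normals.
springs = []
for i in range(H):
    for j in range(W):
        for di, dj in ((0, 1), (1, 0)):
            ii, jj = i + di, j + dj
            if ii < H and jj < W:
                f = 0.5 * (abs(normals[i, j, 2]) + abs(normals[ii, jj, 2]))
                springs.append((vid(i, j), vid(ii, jj), rest * f))

def residuals(x):
    # One residual per spring: current length minus foreshortened target.
    p = x.reshape(-1, 2)
    r = np.empty(len(springs))
    for k, (a, b, target) in enumerate(springs):
        r[k] = np.linalg.norm(p[a] - p[b]) - target
    return r

# Initialize at the undeformed grid and solve; the solution is determined
# only up to a rigid motion, which the initial guess disambiguates in practice.
x0 = np.stack(np.meshgrid(np.arange(W), np.arange(H)), axis=-1).reshape(-1).astype(float)
sol = least_squares(residuals, x0)
print("residual norm:", np.linalg.norm(sol.fun))
```

In the paper's setting this optimization would be re-solved per frame against the evolving normal field, with the previous frame's solution as the initial guess to keep the deformation time-coherent.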