Abstract

Deep learning-based methods have made a significant impact and demonstrated superior performance on the classical image and video super-resolution (SR) tasks. Yet deep learning-based approaches to super-resolving the appearance of 3D objects remain sparse. Due to the nature of rendering 3D models, applying 2D SR methods directly to a 3D object's texture may not be a good approach. In this paper, we propose a rendering loss derived from the rendering of a 3D model and demonstrate its application to the SR task in the context of 3D texturing. Unlike other work on 3D appearance SR, no geometry information about the 3D model is required during network inference. Experimental results demonstrate that incorporating the rendering loss during network training outperforms existing state-of-the-art methods for 3D appearance SR. Furthermore, we provide a new 3D dataset consisting of 97 complete 3D models for further research in this field.
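The core idea of a rendering loss is to compare super-resolved and ground-truth textures not in texture space but in image space, after projecting both through the same rendering of the 3D model. The sketch below illustrates this with a toy stand-in: the paper's actual renderer is not specified here, so the `render` function, the nearest-neighbor UV lookup, and the L1 comparison are all illustrative assumptions, not the authors' implementation (which would use a real, typically differentiable, renderer).

```python
import numpy as np

def render(texture, uv):
    """Toy renderer: nearest-neighbor texture lookup (illustrative only).
    texture: (H, W, 3) texture map; uv: (h, w, 2) per-pixel texture
    coordinates in [0, 1], as produced by rasterizing the 3D model."""
    H, W = texture.shape[:2]
    ys = np.clip(np.round(uv[..., 1] * (H - 1)).astype(int), 0, H - 1)
    xs = np.clip(np.round(uv[..., 0] * (W - 1)).astype(int), 0, W - 1)
    return texture[ys, xs]

def rendering_loss(sr_texture, hr_texture, uv):
    """Mean L1 difference between renderings of the super-resolved
    and ground-truth textures under the same view (same uv map)."""
    return np.abs(render(sr_texture, uv) - render(hr_texture, uv)).mean()
```

Because the loss is computed on rendered views, texture regions that are never sampled by the model's UV mapping contribute nothing, which is precisely what distinguishes this objective from a plain per-texel 2D SR loss.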
