Abstract

Reproducing the appearance of arbitrary layered materials has long been a critical challenge in computer graphics, owing to the demanding requirements of both physical accuracy and low computation cost. Recent studies have demonstrated promising results with learning-based representations that implicitly encode the appearance of complex (layered) materials in neural networks. However, existing generally-learned models often struggle to achieve both strong representation ability and high runtime performance, and they also lack physical parameters for material editing. To address these concerns, we introduce MetaLayer, a new methodology leveraging meta-learning for modeling and rendering layered materials. MetaLayer contains two networks: a BSDFNet that compactly encodes layered materials into implicit neural representations, and a MetaNet that establishes the mapping between the physical parameters of each material and the weights of its corresponding implicit neural representation. A new positional encoding method and a well-designed training strategy are employed to improve the performance and quality of the neural model. As a new learning-based representation, the proposed MetaLayer model provides both fast responses to material editing and high-quality results for a wide range of layered materials, outperforming existing layered BSDF models.
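The core idea described above, a MetaNet that maps physical material parameters to the weights of a per-material BSDFNet, follows the hypernetwork pattern. Below is a minimal numpy sketch of that pattern; all dimensions, names, and the single-linear-layer MetaNet are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

PARAM_DIM = 4   # e.g. per-layer albedo/roughness (assumed)
HIDDEN = 16     # BSDFNet hidden width (assumed)
IN_DIM = 6      # encoded incoming/outgoing directions (assumed)

# Number of BSDFNet weights the MetaNet must produce:
# one hidden layer plus an RGB output layer, each with a bias.
N_W1 = IN_DIM * HIDDEN + HIDDEN
N_W2 = HIDDEN * 3 + 3
N_WEIGHTS = N_W1 + N_W2

# MetaNet sketch: a single linear map from physical parameters
# to a flat BSDFNet weight vector (untrained, random for illustration).
meta_W = rng.normal(0, 0.1, (PARAM_DIM, N_WEIGHTS))
meta_b = np.zeros(N_WEIGHTS)

def metanet(params):
    """Map physical material parameters to BSDFNet weights."""
    return params @ meta_W + meta_b

def bsdfnet(weights, x):
    """Evaluate the generated one-hidden-layer ReLU MLP at queries x."""
    w1 = weights[:IN_DIM * HIDDEN].reshape(IN_DIM, HIDDEN)
    b1 = weights[IN_DIM * HIDDEN:N_W1]
    w2 = weights[N_W1:N_W1 + HIDDEN * 3].reshape(HIDDEN, 3)
    b2 = weights[N_W1 + HIDDEN * 3:]
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2  # RGB response (unnormalized)

# Editing the material amounts to changing `params`: one MetaNet
# forward pass regenerates the BSDFNet weights, with no retraining.
params = np.array([0.8, 0.2, 0.5, 0.1])  # hypothetical material
queries = rng.normal(size=(5, IN_DIM))   # encoded direction queries
rgb = bsdfnet(metanet(params), queries)
print(rgb.shape)  # (5, 3)
```

This illustrates why such a representation supports fast editing: the expensive part (training the MetaNet) happens once, while each edit costs only a single forward pass.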
