Abstract

Single-view intrinsic decomposition of fabric images is a meaningful and challenging task for fabric analysis. However, obtaining sufficient ground-truth data for supervised training of intrinsic images is costly. In this article, we explore a novel method to decompose fabric images into reflectance and shading. By introducing the wavelet transform into the proposed CNN model, the method exploits information arising from inherent constraints during training and thereby eliminates the need for ground-truth labels. Based on three assumptions, we describe a new training framework for the network with three types of loss function: a prior loss, a relative total variation loss, and a constrained shared-consistency loss. The trained model proved very efficient, requiring only 0.007 s to process one image. The results show that our method can separate the color and fine texture of fabric images to a certain extent. Finally, we propose several applications of the trained model.
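The abstract rests on the standard intrinsic-image model, in which an image is the per-pixel product of a reflectance layer and a shading layer, with training driven by a weighted sum of three loss terms. The sketch below illustrates that structure only; the function bodies, weights, and the simple (non-relative) total variation stand-in are illustrative assumptions, not the paper's actual formulations.

```python
import numpy as np

def reconstruct(reflectance, shading):
    """Intrinsic image model: image = reflectance * shading (element-wise)."""
    return reflectance * shading

def total_variation(x):
    """Plain total variation, a stand-in for the paper's relative total variation."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def combined_loss(image, reflectance, shading,
                  w_prior=1.0, w_rtv=0.1, w_cons=0.5):
    # Placeholder prior term: the reconstruction should match the input image.
    prior = np.mean((reconstruct(reflectance, shading) - image) ** 2)
    # Placeholder smoothness term on the shading layer.
    rtv = total_variation(shading) / shading.size
    # Placeholder consistency term: penalize cross-channel disagreement in
    # reflectance (zero for single-channel inputs).
    cons = np.var(reflectance, axis=-1).mean() if reflectance.ndim == 3 else 0.0
    return w_prior * prior + w_rtv * rtv + w_cons * cons
```

In an actual training loop, a loss of this shape would be applied to the CNN's predicted reflectance and shading maps and minimized by backpropagation, which is what lets the constraints substitute for ground-truth labels.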
