Abstract

We investigate the use of photometric invariance and deep learning to compute intrinsic images (albedo and shading). We propose albedo and shading gradient descriptors derived from physics-based models. Using these descriptors, albedo transitions are masked out and an initial sparse shading map is calculated directly from the corresponding RGB image gradients in a learning-free, unsupervised manner. Then, an optimization method is proposed to reconstruct the full dense shading map. Finally, we integrate the generated shading map into a novel deep learning framework to refine it and to predict the corresponding albedo image, achieving full intrinsic image decomposition. By doing so, we are the first to directly address the texture and intensity ambiguity problems of shading estimation. Large-scale experiments show that our approach, steered by physics-based invariant descriptors, achieves superior results on the MIT Intrinsics, NIR-RGB Intrinsics, Multi-Illuminant Intrinsic Images, Spectral Intrinsic Images, and As Realistic As Possible datasets, and competitive results on the Intrinsic Images in the Wild dataset, while achieving state-of-the-art shading estimations.

Highlights

  • Intrinsic image decomposition is the inverse problem of recovering the image formation components, such as reflectance and shading (Barrow and Tenenbaum, 1978)

  • In this paper, we investigate the use of photometric invariance and deep learning to compute intrinsic images

  • The results show that, compared with deep learning based estimations, our proposed models achieve better performance at generating albedo and shading maps on the dataset

Summary

Introduction

Intrinsic image decomposition is the inverse problem of recovering the image formation components, such as reflectance and shading (Barrow and Tenenbaum, 1978). Classifying image gradients into albedo or shading is not a trivial task due to various photometric effects such as strong shadow casts, illuminant color, surface geometry changes, or weak albedo transitions. We investigate the use of photometric invariance and deep learning to compute intrinsic images (albedo and shading). Albedo transitions are masked out and an initial shading map is calculated directly from the corresponding RGB image gradients in a learning-free, unsupervised manner. We then integrate the shading map into a deep learning model to achieve full intrinsic image decomposition. Our contributions include: (i) we are the first to combine photometric invariance and deep learning to address the intrinsic image decomposition task; (ii) we propose a novel deep learning model to leverage the physics-based shading map for the intrinsic image decomposition task; (iii) we extend the dataset of Baslamisli et al. (2018b) from 15,000 to 50,000 images to train our models, which will be made publicly available.
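The gradient-classification idea above can be illustrated with a minimal sketch. Under a Lambertian model, the image factors as I = A · S with grayscale shading S, so chromaticity I / sum(I) is invariant to shading and its gradients flag albedo transitions. The code below is an assumed simplification, not the paper's actual descriptors: the function name, the finite-difference gradients, and the `albedo_thresh` value are all illustrative choices.

```python
import numpy as np

def sparse_shading_gradients(img, albedo_thresh=0.05):
    """Illustrative sketch of photometric-invariance-based gradient
    classification (hypothetical, not the paper's exact descriptors).

    Under I = A * S with grayscale shading S, chromaticity
    I / sum(I) = A / sum(A) is independent of S, so strong
    chromaticity gradients indicate albedo transitions. Masking
    them out leaves gradients attributable to shading."""
    eps = 1e-6
    # Chromaticity: invariant to (grayscale) shading changes.
    chrom = img / (img.sum(axis=2, keepdims=True) + eps)
    # Log intensity: log I = log A + log S separates additively.
    log_int = np.log(img.mean(axis=2) + eps)

    # Finite-difference gradients along x (y is analogous).
    d_chrom = np.abs(np.diff(chrom, axis=1)).sum(axis=2)
    d_log = np.diff(log_int, axis=1)

    # Where chromaticity changes, the edge is likely albedo: mask it.
    return np.where(d_chrom < albedo_thresh, d_log, 0.0)

# Constant albedo under a shading ramp: gradients survive the mask.
H, W = 8, 8
albedo = np.ones((H, W, 3)) * np.array([0.8, 0.4, 0.2])
shading = np.linspace(0.2, 1.0, W)[None, :, None]
grads = sparse_shading_gradients(albedo * shading)

# An albedo step under constant shading: gradient is masked out.
stepped = albedo.copy()
stepped[:, W // 2:, :] = 0.2
masked = sparse_shading_gradients(stepped)
```

In the first case every surviving gradient comes from the shading ramp; in the second, the chromaticity change at the albedo step triggers the mask, so no gradient is misattributed to shading. The paper's dense shading map would then be reconstructed from such a sparse gradient field by optimization.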

Related work
Image formation model
Albedo gradients
Shading gradients
Shading
Intrinsic image decomposition
Experiments and evaluation
Evaluations on object-level datasets
Evaluations on scene-level datasets
Findings
Conclusion
