Abstract

Deep learning-based intrinsic image decomposition (IID) has gained significant attention in computer vision due to its high efficiency and accuracy. However, the development of deep learning-based IID methods in the remote sensing field has been limited by the lack of experimental datasets. This article proposes a two-stream encoder-decoder network for the single hyperspectral (HS) image IID task. The proposed network comprises a reflectance estimation subnetwork and a shading estimation subnetwork, which predict the intrinsic components separately. The proposed model introduces three physical losses to enhance performance: 1) in the reflectance estimation subnetwork, a self-similarity loss on the reflectance component is added to satisfy the basic assumption that pixels with similar intensity tend to share similar reflectance; 2) in the shading estimation subnetwork, a shading structure loss is added to ensure that the structure of the shading component conforms to physical observation; and 3) a reconstruction loss connecting the two subnetworks is imposed to ensure that the estimated intrinsic components are physically consistent with the input. Finally, to avoid an unreasonable decomposition, the entire network is initialized with the reflectance estimated by a physical model. Quantitative experimental results on intraclass consistency and classification metrics demonstrate that the proposed physical prior-driven, unsupervised learning-based IID network outperforms currently available learning-based and optimization-based approaches.
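As a rough illustration of the ideas summarized above, the sketch below pairs two small encoder-decoder streams with the three described loss terms under the standard multiplicative intrinsic model (image ≈ reflectance × shading). All class and function names, layer configurations, loss formulations, and weights are hypothetical assumptions: the abstract does not specify them, and the physics-based reflectance initialization mentioned in the abstract is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Stream(nn.Module):
    """Minimal encoder-decoder used for both streams (hypothetical architecture)."""
    def __init__(self, in_bands, out_bands):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_bands, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class TwoStreamIID(nn.Module):
    """Two-stream network: one subnetwork predicts reflectance, the other shading."""
    def __init__(self, bands):
        super().__init__()
        self.reflectance_net = Stream(bands, bands)  # per-band reflectance
        self.shading_net = Stream(bands, 1)          # single-channel shading map

    def forward(self, hsi):
        return self.reflectance_net(hsi), self.shading_net(hsi)


def reconstruction_loss(hsi, reflectance, shading):
    # Intrinsic model: image ≈ reflectance * shading (elementwise).
    return F.l1_loss(reflectance * shading, hsi)


def self_similarity_loss(hsi, reflectance, k=8, sigma=0.1):
    # Pixels with similar input intensity should have similar reflectance.
    # Sketch: compare every pixel with k randomly sampled reference pixels,
    # weighted by intensity similarity (k and sigma are placeholder values).
    flat_x = hsi.flatten(2)           # (batch, bands, height*width)
    flat_r = reflectance.flatten(2)
    n = flat_x.shape[-1]
    loss = 0.0
    for j in torch.randint(0, n, (k,)).tolist():
        w_sim = torch.exp(-((flat_x - flat_x[..., j:j + 1]) ** 2).mean(1, keepdim=True) / sigma)
        loss = loss + (w_sim * (flat_r - flat_r[..., j:j + 1]).abs()).mean()
    return loss / k


def shading_structure_loss(hsi, shading):
    # Encourage the shading gradients to follow the structure of the input image.
    gray = hsi.mean(1, keepdim=True)
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return F.l1_loss(dx(shading), dx(gray)) + F.l1_loss(dy(shading), dy(gray))


if __name__ == "__main__":
    hsi = torch.rand(2, 100, 64, 64)   # batch of 100-band HS patches (toy data)
    model = TwoStreamIID(bands=100)
    reflectance, shading = model(hsi)
    loss = (reconstruction_loss(hsi, reflectance, shading)
            + 0.1 * self_similarity_loss(hsi, reflectance)
            + 0.1 * shading_structure_loss(hsi, shading))  # placeholder weights
    loss.backward()
```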
