Abstract
Recovering 3D surfaces with photometric stereo is challenging because real-world objects have non-Lambertian surfaces. Although much effort has been devoted to this issue, existing deep-learning photometric stereo methods do not fully consider the influence of global–local and deep–shallow features during training, and how to combine multiple features effectively within a single framework to overcome their individual drawbacks remains unexplored. We therefore propose a novel multi-feature fusion photometric stereo network (MF-PSN) that fuses both global–local and deep–shallow features. Global–local fusion retains the features of each individual illumination together with the most salient features across all illuminations, thereby exploiting the information in every input image. Deep–shallow fusion preserves features from deep and shallow layers with different receptive fields, which improves the accuracy and robustness of the model. Experiments show that multi-feature fusion makes full use of the input images and yields better reconstruction of the object's surface normals. Extensive ablation studies and experiments on the widely used DiLiGenT benchmark verify the effectiveness of the proposed method, and additional tests on the Gourd & Apple dataset and the Light Stage Data Gallery confirm its generalization ability.
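To make the two fusion ideas in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' released MF-PSN implementation: layer sizes, the class name `FusionSketch`, and the exact operators (max-pooling over illuminations for global features, channel concatenation for fusion) are assumptions chosen for illustration only.

```python
# Hypothetical sketch of global-local and deep-shallow feature fusion for
# photometric stereo. All architectural details are illustrative assumptions.
import torch
import torch.nn as nn


class FusionSketch(nn.Module):
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        # Shallow extractor: small receptive field, per-image (local) features.
        self.shallow = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.LeakyReLU(0.1),
        )
        # Deep extractor: larger receptive field via stacked convolutions.
        self.deep = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.LeakyReLU(0.1),
        )
        # Regressor maps the fused features to a per-pixel surface normal.
        self.head = nn.Sequential(
            nn.Conv2d(4 * feat_ch, feat_ch, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(feat_ch, 3, 3, padding=1),
        )

    def forward(self, images):
        # images: list of (B, 3, H, W) observations under different lights.
        shallow_feats = [self.shallow(x) for x in images]      # local, shallow
        deep_feats = [self.deep(f) for f in shallow_feats]     # local, deep
        # Global features: a max over all illuminations keeps the most salient
        # response at each pixel, independent of the number of input images.
        global_shallow = torch.stack(shallow_feats, 0).max(0).values
        global_deep = torch.stack(deep_feats, 0).max(0).values
        normals = []
        for f_s, f_d in zip(shallow_feats, deep_feats):
            # Deep-shallow and global-local fusion by channel concatenation.
            fused = torch.cat([f_s, f_d, global_shallow, global_deep], dim=1)
            n = self.head(fused)
            normals.append(nn.functional.normalize(n, dim=1))  # unit normals
        # Average the per-image estimates into a single normal map.
        return torch.stack(normals, 0).mean(0)


if __name__ == "__main__":
    net = FusionSketch()
    imgs = [torch.rand(1, 3, 32, 32) for _ in range(4)]  # 4 light directions
    print(net(imgs).shape)  # torch.Size([1, 3, 32, 32])
```

In this sketch, the max-pooled global features are order-agnostic and accept any number of input images, while the per-image local features and the shallow-layer skip preserve fine detail that the deeper, wider-receptive-field features alone would blur.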