Shape recovery of an object from shading variations produced by different light sources has recently been reconsidered. Improvements have made the photometric stereo approach a competitive alternative to other shape reconstruction methods. However, most photometric stereo methods ignore factors that matter in practical applications. The setup considered in this paper tackles photometric stereo reconstruction in a specific near-field imaging scenario, in which both the camera and the light sources are close to the imaged object; "close" can loosely be taken to mean that the distances between lights, camera, and object are of similar magnitude. The known challenges characterizing the problem involve perspective viewing geometry, point light sources, and images that may include shadowed regions. Here, we pay special attention to how to model these aspects faithfully while at the same time designing an efficient and robust numerical solver. We present a mathematical formulation that integrates the above assumptions into a single coherent model based on quasi-linear PDEs. Well-posedness is proved by showing uniqueness of a weak (i.e., Lipschitz continuous) solution. The surface reconstruction in our near-field scenario can then be executed efficiently in linear time. The strategy of merging the irradiance equations provided by the different light sources allows us to consider a characteristic expansion model that enables direct computation of the surface. We evaluate several types of light attenuation models with nonuniform albedo and noise on synthetic data. We also demonstrate the proposed method on surface reconstruction of real data using three images, each taken with a different light source by a working prototype. We demonstrate the accuracy of the proposed method compared to methods that ignore the near-field setup and assume distant, parallel-beam light sources.
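As a rough sketch of the kind of model involved (the notation below is illustrative and not necessarily the paper's), a Lambertian point-light irradiance equation for light source $i$ with position $\mathbf{S}_i$ and intensity $\phi_i$ can be written as
\[
I_i(\mathbf{x}) \;=\; \rho(\mathbf{x})\,\frac{\phi_i}{\lVert \mathbf{S}_i - \mathbf{P}(\mathbf{x}) \rVert^{2}}\;
\mathbf{N}(\mathbf{x}) \cdot \frac{\mathbf{S}_i - \mathbf{P}(\mathbf{x})}{\lVert \mathbf{S}_i - \mathbf{P}(\mathbf{x}) \rVert},
\]
where $\mathbf{P}(\mathbf{x})$ is the surface point observed under perspective projection at pixel $\mathbf{x}$, $\mathbf{N}$ is the unit surface normal, $\rho$ is the albedo, and the inverse-square factor models near-field light attenuation. Combining two such equations in ratio form, e.g.
\[
I_j(\mathbf{x})\,a_i(\mathbf{x})\,\mathbf{N}(\mathbf{x})\cdot\hat{\mathbf{L}}_i(\mathbf{x})
\;-\;
I_i(\mathbf{x})\,a_j(\mathbf{x})\,\mathbf{N}(\mathbf{x})\cdot\hat{\mathbf{L}}_j(\mathbf{x}) \;=\; 0,
\]
with $a_i$ denoting the attenuation term and $\hat{\mathbf{L}}_i$ the unit lighting direction, eliminates the albedo; once $\mathbf{N}$ is expressed through the gradient of the depth under perspective projection, each pair of images yields a quasi-linear PDE in the depth, the type of equation the proposed solver integrates.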