Abstract

The aim of this paper is to achieve fast and realistic 3D face reconstruction when only a limited set of images taken from multiple views is available. Instead of relying merely on 2D images, as suggested by image-based modeling approaches, we investigate several shape-from-shading (SFS) techniques, which have been studied extensively in computer vision research. This is because we believe that the nearly complete set of images required by image-based modeling techniques is rarely available in real-world applications. In this paper, we investigate the effectiveness of three different SFS algorithms in providing partial 3D shapes of the face to be reconstructed. Each algorithm is selected from one of the three main classes of SFS techniques, i.e., linear, propagation, and minimization approaches. The reconstruction process is performed by our novel neural network learning scheme, which is able to successively refine the polygon vertex parameters of an initial 3D shape based on depth maps of several calibrated images. To evaluate the reconstruction results based on these SFS techniques, we measure the average vertex error and pixel error against ground-truth 3D data obtained by a 3D scanner. We also compare these results with those obtained by using only 2D images. In addition, we measure the total computation time required by the reconstruction process.
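As a minimal illustrative sketch (not taken from the paper), the evaluation metrics mentioned above could be computed as below, assuming the reconstructed mesh has been registered to the scanner mesh so that vertices correspond one-to-one, and that depth maps are given as same-sized arrays; the function and variable names are hypothetical.

```python
# Hedged sketch of the reported error metrics; all names are illustrative,
# and vertex correspondence after registration is assumed.
import numpy as np

def average_vertex_error(reconstructed, ground_truth):
    """Mean Euclidean distance between corresponding vertices (N x 3 arrays)."""
    diffs = np.linalg.norm(reconstructed - ground_truth, axis=1)
    return diffs.mean()

def average_pixel_error(rendered_depth, reference_depth):
    """Mean absolute difference between two depth maps of equal shape."""
    return np.abs(rendered_depth - reference_depth).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ground_truth = rng.random((1000, 3))          # stand-in for scanner vertices
    reconstructed = ground_truth + rng.normal(scale=0.01, size=(1000, 3))
    print("average vertex error:", average_vertex_error(reconstructed, ground_truth))
```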
