Abstract
Estimating a depth map and, at the same time, predicting the 3D pose of an object from a single 2D color image is a very challenging task. Depth estimation is typically performed through stereo vision and involves several time-consuming stages, such as epipolar geometry estimation, rectification, and matching. Alternatively, when stereo vision is not available or applicable, depth relations can be inferred from a single image, as studied in this paper. More precisely, deep learning is applied to estimate a depth map from a single image; this map is then used to predict the 3D pose of the main object depicted in the image. The proposed model consists of two successive neural networks. The first network, based on a Generative Adversarial Network (GAN), estimates a dense depth map from the given color image. A Convolutional Neural Network (CNN) then predicts the 3D pose from the generated depth map through regression. The main difficulty in jointly estimating depth maps and 3D poses with deep networks is the lack of training data with both depth and viewpoint annotations. This work adopts a cross-domain training procedure that uses 3D CAD models corresponding to the objects appearing in real images to render depth images from different viewpoints. These rendered images are then used to guide the GAN in learning the mapping from the image domain to the depth domain. By exploiting these rendered depth images as a source of training data, the proposed model outperforms state-of-the-art models on the PASCAL 3D+ dataset. The code of the proposed model is publicly available at https://github.com/SaddamAbdulrhman/Depth-and-Viewpoint-Estimation/tree/master.
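The sketch below illustrates the two-stage pipeline described in the abstract: a generator maps an RGB image to a dense depth map, and a CNN regresses a 3D viewpoint from that depth map. It is a minimal PyTorch illustration under assumed layer choices and an assumed three-angle viewpoint parameterization; the class names, layer sizes, and output format are hypothetical and not taken from the authors' code.

```python
# Minimal sketch (assumed architecture, not the authors' implementation):
# stage 1 is a toy encoder-decoder standing in for the GAN generator that
# produces a depth map; stage 2 is a toy CNN that regresses viewpoint angles.
import torch
import torch.nn as nn


class DepthGenerator(nn.Module):
    """Encoder-decoder that maps a 3-channel RGB image to a 1-channel depth map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # depth in [0, 1]
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))


class PoseRegressor(nn.Module):
    """CNN that regresses three viewpoint angles (e.g. azimuth, elevation, in-plane rotation)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)

    def forward(self, depth):
        return self.head(self.features(depth).flatten(1))


if __name__ == "__main__":
    rgb = torch.randn(1, 3, 128, 128)      # single color image
    depth = DepthGenerator()(rgb)           # stage 1: estimated dense depth map
    pose = PoseRegressor()(depth)           # stage 2: predicted 3D pose from the depth map
    print(depth.shape, pose.shape)          # (1, 1, 128, 128) and (1, 3)
```

In the paper the first stage is trained adversarially against depth maps rendered from 3D CAD models, while the second stage is trained by regression on the viewpoint annotations; the sketch only shows the forward data flow between the two networks.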