Abstract

Objects without prominent textures pose challenges for automatic 3D model reconstruction and feature point matching. Such objects are common in many industrial applications, including metal defect detection, archeological applications of photogrammetry, and 3D object reconstruction from infrared imagery. Most common feature point descriptors fail to match local patches in featureless regions of an object, and different kinds of textures require different feature descriptors for high-quality image matching. Hence, automatic reconstruction of low-textured 3D objects using Structure from Motion (SfM) methods is challenging, although such reconstruction is possible with the aid of a human operator. Deep learning-based descriptors have recently outperformed most common feature point descriptors. This paper focuses on the development of a new conditional generative adversarial auto-encoder (GANcoder) based on deep learning. We use an encoder-decoder architecture with four convolutional and four deconvolutional layers as a starting point for our research. Our main contribution is a generative adversarial framework, GANcoder, for training the auto-encoder on textureless data. Traditional training approaches using an L1 norm tend to converge to the mean image on low-textured images. In contrast, we use an adversarial discriminator to provide an additional loss function that distinguishes real images from the training dataset from auto-encoder reconstructions. We collected a large dataset, GANPatches, of feature point patches from nearly textureless objects to train and evaluate our model and baselines; the dataset includes 16k pairs of image patches. We evaluated our GANcoder and the baselines on two tasks. First, we compare the matching scores of our GANcoder and the baselines. Second, we evaluate the accuracy of 3D reconstruction of low-textured objects using an SfM pipeline with stereo matching provided by our GANcoder. The evaluation results are encouraging and demonstrate that our model matches and surpasses the state of the art in feature matching on low-textured objects.
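
The sketch below illustrates, in PyTorch, the kind of architecture and training objective the abstract describes: a four-layer convolutional encoder paired with a four-layer deconvolutional decoder, trained with an adversarial loss from a discriminator in addition to an L1 term. The layer widths, kernel sizes, loss weighting, and discriminator design are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a GAN-trained patch auto-encoder (assumed PyTorch setup).
import torch
import torch.nn as nn

class GANcoder(nn.Module):
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        # Encoder: four strided convolutions, each halving spatial resolution.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(base * 4, base * 8, 4, 2, 1), nn.ReLU(inplace=True),
        )
        # Decoder: four transposed convolutions restoring the patch size.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, in_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, x):
        code = self.encoder(x)          # latent code usable as a patch descriptor
        return self.decoder(code), code

class Discriminator(nn.Module):
    """Separates real training patches from auto-encoder reconstructions."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, 2, 1),   # patch-level real/fake logits
        )

    def forward(self, x):
        return self.net(x)

def generator_loss(disc, real, fake, l1_weight=10.0):
    # The adversarial term pushes reconstructions toward the real-patch manifold,
    # counteracting the blurry "mean image" that a pure L1 objective produces.
    bce = nn.BCEWithLogitsLoss()
    pred_fake = disc(fake)
    adv = bce(pred_fake, torch.ones_like(pred_fake))
    return adv + l1_weight * nn.functional.l1_loss(fake, real)
```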
