Abstract

Increasingly complex 3D CAD models are essential throughout the life-cycle stages of modern engineering projects. Although these models often contain many repeated geometries, instancing information is frequently unavailable, which inflates storage, transmission, and rendering costs. Previous research has successfully applied shape-matching techniques to identify repeated geometries and thereby reduce memory requirements and improve rendering performance. However, these approaches require consistent vertex topology, prior knowledge about the scene, and/or the laborious creation of labeled datasets. In this paper, we present an unsupervised deep-learning method that overcomes these limitations and is capable of identifying repeated geometries and computing their instancing transformations. The method also guarantees a maximum visual error and preserves intrinsic characteristics of surfaces. Results on real-world 3D CAD models demonstrate the effectiveness of our approach: the datasets are reduced in size by up to 83.93%. Our approach achieves better results than previous work that does not rely on supervised learning, and it is applicable to any kind of 3D scene and geometry.
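To make the instancing idea concrete, the sketch below deduplicates repeated geometries with a simple translation-invariant shape signature and stores each occurrence as a (prototype, translation) pair. This is a minimal illustration of geometry instancing in general, not the paper's unsupervised deep-learning method; all function names and the signature scheme are our own assumptions.

```python
# Illustrative sketch of geometry instancing (NOT the paper's method):
# group meshes whose translation-invariant signatures match, keep one
# prototype per group, and store each occurrence as a translation.
import numpy as np

def signature(vertices, decimals=6):
    """Translation-invariant signature: sorted vertex distances from the centroid."""
    v = np.asarray(vertices, dtype=float)
    d = np.linalg.norm(v - v.mean(axis=0), axis=1)
    return tuple(np.round(np.sort(d), decimals))

def instance_scene(meshes):
    """Replace repeated meshes with (prototype index, translation) pairs."""
    prototypes, instances, seen = [], [], {}
    for m in meshes:
        v = np.asarray(m, dtype=float)
        centroid = v.mean(axis=0)
        sig = signature(v)
        if sig not in seen:
            seen[sig] = len(prototypes)
            prototypes.append(v - centroid)  # store the centered prototype once
        instances.append((seen[sig], centroid))
    return prototypes, instances

# Example: three translated copies of the same triangle collapse to one prototype.
tri = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
scene = [tri, tri + [5., 0., 0.], tri + [0., 7., 0.]]
protos, insts = instance_scene(scene)
print(len(protos), len(insts))  # → 1 3
```

A real system would also need rotation/reflection invariance and a tolerance for the "maximum visual error" guarantee the abstract mentions; this sketch handles translations only, which is why it is deliberately labeled an assumption.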
