Abstract

Despite the rapid development of crowd-sourced photogrammetric reconstruction, a large share of the resulting 3D models is still stored in arbitrary scale, position, and orientation. To alleviate this issue, model alignment methods have been proposed to identify good viewpoints for 3D models. This work introduces a symmetry-aware alignment method for photogrammetric 3D models. Our key idea is to associate the best orientation with global symmetry and to place objects the way they commonly appear in our surroundings. Compared with previous alignment methods that infer only the upright orientation, we also determine the frontal orientation. The proposed method works in four steps. First, we compute symmetry features using point global symmetry descriptors (PGSD). Second, a set of candidate reflective symmetry planes is generated by feature aggregation. Third, we extract the global symmetry and the principal axis from the subset of symmetry planes with the smallest detection errors. Finally, the upright and frontal orientations are determined successively from the principal axis and information entropy. We evaluated the proposed method on synthetic and real photogrammetric datasets. Experiments show that the proposed PGSD achieves higher efficiency and accuracy than curvature, FPFH, and deep learning-based frameworks. Our method outperforms existing symmetry detection and upright orientation methods, with an average recall of 71.13% for the main symmetry plane and 78.50% of the models well-aligned.
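The listing below is a minimal, illustrative Python/NumPy sketch of the four-step idea summarised above, not the authors' implementation: pair-based plane voting with a nearest-neighbour support score stands in for the PGSD computation and feature aggregation, the winning plane normal is used directly as a stand-in for the principal (upright) axis, and a 1-D projection-histogram entropy stands in for the paper's frontal-orientation criterion. All function names and parameters are hypothetical.

import numpy as np

def candidate_planes(points, n_pairs=500, seed=0):
    # Each sampled point pair (p, q) votes for the mirror plane through their
    # midpoint with unit normal (p - q) / ||p - q||; the plane is n.x + d = 0.
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    keep = i != j
    p, q = points[i[keep]], points[j[keep]]
    n = p - q
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    d = -np.einsum("ij,ij->i", n, (p + q) / 2)
    return n, d

def plane_support(points, n, d, tol):
    # Crude symmetry-error measure: fraction of points whose reflection across
    # the plane (n, d) has a neighbour within `tol` (brute force, O(N^2) memory).
    refl = points - 2.0 * (points @ n + d)[:, None] * n
    dists = np.min(np.linalg.norm(refl[:, None, :] - points[None, :, :], axis=2), axis=1)
    return np.mean(dists < tol)

def best_symmetry_plane(points, tol=0.05):
    # Stand-in for steps 1-3: generate candidate planes and keep the one with the
    # highest symmetric support, i.e. the smallest overall detection error.
    normals, offsets = candidate_planes(points)
    scores = [plane_support(points, n, d, tol) for n, d in zip(normals, offsets)]
    k = int(np.argmax(scores))
    return normals[k], offsets[k], scores[k]

def frontal_by_entropy(points, up, n_bins=32, n_angles=36):
    # Stand-in for step 4: sweep candidate frontal directions orthogonal to `up`
    # and keep the one whose projected point histogram has the lowest entropy.
    up = up / np.linalg.norm(up)
    ref = np.array([1.0, 0.0, 0.0]) if abs(up[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(up, ref); x /= np.linalg.norm(x)
    y = np.cross(up, x)
    best, best_h = None, np.inf
    for a in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        front = np.cos(a) * x + np.sin(a) * y
        hist, _ = np.histogram(points @ front, bins=n_bins)
        p = hist[hist > 0] / hist.sum()
        h = -np.sum(p * np.log(p))
        if h < best_h:
            best, best_h = front, h
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    half = rng.normal(size=(150, 3)) * [1.0, 0.5, 0.3] + [0.8, 0.0, 0.0]
    pts = np.concatenate([half, half * [-1.0, 1.0, 1.0]])  # mirror-symmetric about x = 0
    n, d, score = best_symmetry_plane(pts)
    front = frontal_by_entropy(pts, up=n)  # plane normal used as a stand-in upright axis
    print("plane normal:", np.round(n, 2), "support:", round(score, 2))
    print("frontal direction:", np.round(front, 2))

On the toy cloud above, which is mirror-symmetric about x = 0, the recovered plane normal is close to (1, 0, 0) and the frontal direction lies in the y-z plane; the real method replaces each stand-in with PGSD features, plane aggregation, and the principal-axis and entropy criteria described in the paper.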
