Abstract

3D scanning of artifacts is an important tool for the study and preservation of cultural heritage. Systems for 3D reconstruction are constantly improving, but due to the shape and size of artifacts it is usually necessary to perform 3D scanning from several different positions in space. This raises the problem of 3D registration, the process of aligning different point clouds. Software-based 3D registration methods typically require identifying a sufficient number of point correspondence pairs between the different point clouds. These correspondences are frequently found manually and/or by introducing specially designed objects into the scene. In this work, by contrast, we explore whether OpenPose, a well-known deep learning model, can be used to find corresponding point pairs between different views and thus ensure successful 3D registration. OpenPose is trained to detect patterns and keypoints in images containing people. Observing that many artifacts do have human-like postures, we test our ideas for finding correspondences using OpenPose. Furthermore, if an artifact bears no resemblance to a human figure, we demonstrate a method that introduces a simple human-like image into the 3D scene, in turn allowing OpenPose to facilitate 3D registration between 3D scans from different views. The proposed 3D registration pipeline is easily applicable to many existing 3D scanning solutions for artifacts.
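To make the final alignment step concrete, the sketch below shows one standard way to turn a set of corresponding 3D keypoint pairs into a rigid transform (the Kabsch/SVD method). It is a minimal illustration, not the authors' implementation: it assumes the OpenPose keypoints have already been detected in each view and back-projected to 3D coordinates, and the function name and use of plain NumPy are choices made here for clarity.

```python
import numpy as np

def rigid_transform_from_correspondences(src, dst):
    """Estimate rotation R and translation t mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding 3D keypoints, e.g. OpenPose
    detections back-projected into each scan's coordinate frame.
    Uses the Kabsch algorithm (least-squares rigid alignment via SVD).
    """
    src_centroid = src.mean(axis=0)
    dst_centroid = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_centroid).T @ (dst - dst_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection solution (det(R) = -1)
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t
```

Once R and t are estimated from the keypoint pairs, the same transform can be applied to every point of one scan (`aligned = cloud @ R.T + t`) to bring it into the coordinate frame of the other, after which a standard refinement step such as ICP may be applied.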
