Abstract

A 3D reconstruction method based on dynamic graph convolutional occupancy networks is proposed to address three issues that arise when voxel representations are processed block by block: the loss of texture information, the loss of geometric information after voxelization, and the absence of object-completeness constraints. By constructing a dynamic graph structure for feature extraction, the method aims to recover 3D models with fewer holes and finer local detail. In the feature extraction stage, local pooling within each point cloud block mitigates the loss of non-salient texture features. To counter the loss of geometric constraints and the shortage of scene semantic information caused by block-wise processing, a feature fusion scheme between adjacent blocks is proposed to learn richer scene semantics and long-range dependencies between points. By learning features both within and across blocks, each point retains as much geometric information as possible, alleviating the information loss introduced by voxelization. During surface generation, interpolation infers the occupancy value at each query point, and the Marching Cubes algorithm extracts the three-dimensional surface. Experiments on object-level (ShapeNet) and scene-level (Synthetic Rooms for synthetic scenes, Matterport3D for real-world scenes) datasets demonstrate the effectiveness and superiority of the proposed method.
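
The abstract names two core steps: dynamic graph feature extraction and occupancy-based surface generation with Marching Cubes. The following PyTorch sketch illustrates what each step typically looks like; it is a minimal illustration under assumed design choices, not the authors' implementation, and every name in it (knn_graph, EdgeConv, extract_surface, occupancy_fn) is hypothetical.

```python
import torch
import torch.nn as nn
from skimage.measure import marching_cubes


def knn_graph(x, k):
    """Indices of the k nearest neighbours of each point.

    x: (B, N, C) point features; returns (B, N, k) neighbour indices.
    """
    dist = torch.cdist(x, x)                                   # (B, N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]    # drop each point's self-match


class EdgeConv(nn.Module):
    """DGCNN-style edge convolution: the k-NN graph is rebuilt from the
    current features at every layer, which is what makes the graph dynamic."""

    def __init__(self, in_ch, out_ch, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_ch, out_ch), nn.ReLU(),
            nn.Linear(out_ch, out_ch),
        )

    def forward(self, x):                       # x: (B, N, C)
        idx = knn_graph(x, self.k)              # (B, N, k)
        nbrs = torch.gather(                    # gather neighbour features: (B, N, k, C)
            x.unsqueeze(1).expand(-1, x.size(1), -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))
        centre = x.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([centre, nbrs - centre], dim=-1)  # (centre, offset) edge features
        return self.mlp(edge).max(dim=2).values            # local max-pool over neighbours


def extract_surface(occupancy_fn, grid_res=64, threshold=0.5):
    """Query a (hypothetical) trained occupancy network on a dense grid and run
    Marching Cubes, mirroring the surface-generation stage in the abstract."""
    lin = torch.linspace(-1.0, 1.0, grid_res)
    grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1)
    with torch.no_grad():
        occ = occupancy_fn(grid.reshape(-1, 3)).reshape(grid_res, grid_res, grid_res)
    verts, faces, _, _ = marching_cubes(occ.numpy(), level=threshold)
    return verts, faces
```

Rebuilding the neighbourhood graph from features rather than from fixed coordinates lets points that are distant in space but semantically related become neighbours in deeper layers, which is one plausible way the method could capture the long-range dependencies the abstract mentions.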
