Abstract

Recent state-of-the-art image-based three-dimensional (3D) reconstruction methods mainly represent 3D shapes with triangular meshes, because meshes are more memory-efficient and capture surface detail better than voxels or point clouds. Previous works usually follow an encoder-decoder pattern: a deep neural network extracts features from the image and reconstructs the 3D structure. This is a typical supervised learning process and requires a loss function to supervise training. No existing work directly computes the loss between the reconstructed mesh and the ground-truth mesh; instead, the Chamfer Distance (CD) between sampled point clouds is used as an indirect loss. Most previous works focus on the encoder and decoder rather than on the loss, and CD is used throughout. However, when CD is applied to two point clouds with the same number of points, a point can be matched by any number of points in the other cloud, so some points are barely involved in the loss calculation and the available information is under-used. We therefore propose a new point matching strategy for computing the loss. It limits the maximum number of matches for each point, allowing more points to take part in the loss calculation and thereby improving information utilization. Experiments on single-view reconstruction (SVR) and auto-encoding show that the proposed loss can replace CD in this type of work and yields better training results and 3D reconstruction quality.
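
To make the matching issue concrete, the following is a minimal NumPy sketch, not the authors' implementation: it contrasts standard CD, where each point simply takes its nearest neighbour and a single point may be matched many times, with a hypothetical capped-matching loss in which each point may be matched at most max_matches times. The function names, the greedy nearest-first assignment, and the max_matches parameter are illustrative assumptions.

    import numpy as np

    def chamfer_distance(a, b):
        # a: (N, 3), b: (M, 3) point clouds.
        # Pairwise squared Euclidean distances, shape (N, M).
        d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        # Each point takes its single nearest neighbour in the other cloud;
        # nothing prevents one point from being chosen many times.
        return d.min(axis=1).mean() + d.min(axis=0).mean()

    def capped_match_loss(a, b, max_matches=1):
        # Hypothetical capped-matching loss: each target point may be matched
        # by at most `max_matches` source points, so more points participate.
        def one_direction(src, dst):
            d = np.sum((src[:, None, :] - dst[None, :, :]) ** 2, axis=-1)
            order = np.argsort(d, axis=None)       # all (src, dst) pairs, nearest first
            used = np.zeros(len(dst), dtype=int)   # matches already consumed per dst point
            assigned = np.full(len(src), -1)
            total = 0.0
            for flat in order:
                i, j = np.unravel_index(flat, d.shape)
                if assigned[i] == -1 and used[j] < max_matches:
                    assigned[i] = j
                    used[j] += 1
                    total += d[i, j]
                    if (assigned >= 0).all():      # every source point matched
                        break
            return total / len(src)
        return one_direction(a, b) + one_direction(b, a)

    # Usage on two random clouds of equal size.
    a = np.random.rand(256, 3)
    b = np.random.rand(256, 3)
    print("CD:", chamfer_distance(a, b))
    print("capped:", capped_match_loss(a, b, max_matches=1))

With max_matches=1 and equally sized clouds, the greedy assignment forces every point to be matched exactly once, which is the intuition behind involving more points in the loss.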
