Abstract

Three-dimensional reconstruction from a single image has promising prospects, and neural-network-based approaches have achieved remarkable results. However, most current point-cloud-based three-dimensional reconstruction networks are trained on synthetic data sets and therefore generalize poorly to real scenes. Based on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) data set of large-scale scenes, this article proposes a method for processing real data sets. The data set produced in this work trains our network model more effectively and enables point cloud reconstruction from a single real-world image. The reconstructed point clouds correspond well to the underlying three-dimensional shapes, and the proposed method overcomes, to a certain extent, the uneven distribution of point cloud data obtained by light detection and ranging (LiDAR) scanning.
