Abstract

Three-dimensional (3D) point clouds are important for many applications, including object tracking and 3D scene reconstruction. Point clouds are usually obtained from laser scanners, but their high cost impedes the widespread adoption of this technology. We propose a method to generate the 3D point cloud corresponding to a single red–green–blue (RGB) image. The method retrieves high-quality 3D data from two-dimensional (2D) images captured by conventional cameras, which are generally less expensive. The proposed method comprises two stages. First, a generative adversarial network estimates a depth image from a single RGB image. Then, the 3D point cloud is calculated from the depth image. The estimation relies on the intrinsic parameters of the depth camera employed to generate the training data. The experimental results verify that the proposed method provides high-quality 3D point clouds from single 2D images. Moreover, the method does not require exceptional computational resources, further reducing implementation costs, as a graphics processing unit of moderate capacity suffices to handle the calculations efficiently.
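
The second stage described above amounts to standard pinhole-camera back-projection: each pixel is lifted to a 3D point using the depth value and the camera intrinsics. The sketch below illustrates this step in Python; the function name and the intrinsic values (fx, fy, cx, cy) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into a 3D point cloud using the
    pinhole camera model. fx, fy are the focal lengths in pixels and
    (cx, cy) is the principal point of the depth camera (hypothetical
    values standing in for the intrinsics mentioned in the abstract)."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    # Invert the pinhole projection: X = (u - cx) * Z / fx, etc.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    # Discard pixels with no valid depth reading.
    return points[points[:, 2] > 0]

# Usage with intrinsics resembling a Kinect-style sensor (assumed values):
depth = np.random.uniform(0.5, 4.0, size=(480, 640))  # placeholder depth map
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (N, 3) array of 3D points
```

In practice, the placeholder depth map above would be replaced by the network's estimated depth image, and the intrinsics would be those of the depth camera used to capture the training data.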
