Abstract

Three-dimensional (3D) urban models have gained interest because of their applications in many use cases, such as disaster management, energy management, and solar potential analysis. However, generating these 3D representations requires LiDAR data, which are usually expensive to collect. Because of this cost, LiDAR data are infrequently updated and are unavailable for many regions in the US; as a result, 3D models built from them are either outdated or limited to the locations where the data exist. In contrast, satellite images are freely available and frequently updated. To take advantage of this availability, we propose sat2pc, a deep learning-based approach that predicts the point cloud of a building roof from a single 2D satellite image. Our technique integrates two loss functions, Chamfer Distance and Earth Mover's Distance, resulting in a 3D output that balances overall structure and detail. We extensively evaluate our model and perform ablation studies on a building-roof dataset. Our results show that sat2pc outperforms existing baselines by at least 18.6%. Moreover, we show that our refinement module improves overall performance, resulting in fine-grained 3D output. Finally, we show that the predicted point cloud captures more detail and geometric characteristics than the baselines.
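
For readers unfamiliar with these point-cloud losses, the sketch below illustrates how a combined Chamfer Distance and Earth Mover's Distance objective could look in PyTorch. This is an illustration under stated assumptions, not the authors' implementation: the blending weight alpha is hypothetical, and the EMD here is computed by exact bipartite assignment, which is practical only for small point clouds.

```python
import torch
from scipy.optimize import linear_sum_assignment


def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3)."""
    d = torch.cdist(p, q)  # (N, M) pairwise Euclidean distances
    # Average nearest-neighbor distance in both directions.
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


def emd_loss(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Assignment-based EMD for equal-size point sets.

    Exact but O(N^3), so this sketch suits only small clouds; large-scale
    training typically uses an approximate EMD instead.
    """
    d = torch.cdist(p, q)  # (N, N) cost matrix
    row, col = linear_sum_assignment(d.detach().cpu().numpy())
    row, col = torch.as_tensor(row), torch.as_tensor(col)
    return d[row, col].mean()  # gradient still flows through d


def combined_loss(pred: torch.Tensor, gt: torch.Tensor,
                  alpha: float = 0.5) -> torch.Tensor:
    """Blend the two losses: Chamfer favors overall structure,
    EMD encourages a uniform, detailed point distribution.
    alpha is a hypothetical weight, not taken from the paper."""
    return alpha * chamfer_distance(pred, gt) + (1.0 - alpha) * emd_loss(pred, gt)


# Usage with random stand-in clouds:
pred = torch.rand(256, 3, requires_grad=True)
gt = torch.rand(256, 3)
loss = combined_loss(pred, gt)
loss.backward()
```

Intuitively, Chamfer Distance only requires each point to be near some point of the other cloud, so it preserves coarse geometry but can tolerate uneven density, while EMD's one-to-one matching penalizes clumping; blending the two is what lets the output balance structure and detail.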
