Abstract

3D models generated with low-cost sensors can be useful for quick 3D urban model updating, yet the quality of such models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method that uses multi-view iPhone images or an iPhone video file as input. We register the automatically generated point cloud onto a terrestrial laser scanning (TLS) point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate that the proposed automatic 3D model generation framework could be used for updating 3D urban maps, fusion and detail enhancement, and quick or real-time change detection. However, further insight is first needed into the circumstances required to guarantee successful point cloud generation from smartphone images.
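The comparison described above rests on two measures: nearest-neighbour point-to-point distances between the two clouds, and a local roughness value per point. A minimal NumPy/SciPy sketch of both is given below; the input arrays are hypothetical stand-ins for the iPhone and TLS clouds, and roughness is computed here as the distance of each point to the best-fit plane through its k nearest neighbours, which is one common definition and not necessarily the exact one used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(source, reference):
    """Nearest-neighbour point-to-point distance from each source
    point to the reference cloud (both given as (n, 3) arrays)."""
    tree = cKDTree(reference)
    distances, _ = tree.query(source, k=1)
    return distances

def local_roughness(points, k=10):
    """Per-point roughness: distance of a point to the least-squares
    plane fitted through its k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    rough = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nb = points[nbrs]
        centroid = nb.mean(axis=0)
        # plane normal = right singular vector of the smallest singular value
        _, _, vt = np.linalg.svd(nb - centroid)
        normal = vt[-1]
        rough[i] = abs(np.dot(points[i] - centroid, normal))
    return rough
```

Outliers can then be flagged, for example, by thresholding the distance array, and the μ and σ reported in the abstract correspond to the mean and standard deviation of the roughness values of each cloud.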

Highlights

  • Point clouds are developing towards a standard product in urban management

  • The quality of iPhone-generated point clouds is evaluated by comparing them with Terrestrial Laser Scanner (TLS) point clouds

  • The iPhone point cloud is generated from iPhone 3GS smartphone sensor data

INTRODUCTION

Point clouds are developing towards a standard product in urban management. Still, outdoor point cloud acquisition with active sensors is a relatively expensive and involved process. We discuss how to generate a point cloud from multi-view iPhone images and from iPhone videos. Wang (2012) proposed a semi-automatic algorithm to reconstruct 3D building models from images taken with smartphones, using GPS and g-sensor (accelerometer) information. They used multi-view smartphone images with 3D position and g-sensor information to reconstruct building facades. Heidari and Alaei-Novin (2013) proposed an object tracking method using the iPhone 4 camera sensor. These studies show the usability of iPhone images for feature extraction and matching, which is one of the important steps of 3D depth measurement from multi-view images. We assess the accuracy of the point clouds generated with an iPhone sensor by using TLS point clouds as reference.
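Once features have been extracted and matched across views, the corresponding image points are triangulated into 3D, which is the depth-measurement step mentioned above. The following sketch illustrates it with linear (DLT) triangulation of a single point from two views; the projection matrices `P1` and `P2` are assumed to be already known (e.g. from structure-from-motion pose estimation) and the values used here are purely illustrative.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: (3, 4) camera projection matrices.
    x1, x2: matched 2D image points (x, y) in the two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # the solution is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenise
```

Repeating this over all matched feature pairs, and over many image pairs, yields the kind of multi-view point cloud evaluated in this article.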

IPHONE AND TLS POINT CLOUDS
TLS Point Cloud
COMPARING THE POINT CLOUDS
ACCURACY TEST ON THE SHOWCASES
FUTURE WORK
