Accuracy assessment of building point clouds automatically generated from iPhone images

Article (English, open access)
Sirmacek, B. ; Lindenbergh, R. (2014)
  • Publisher: Copernicus Publications
  • Journal: (issn: 2194-9034, eissn: 2194-9034)
  • Related identifiers: doi: 10.5194/isprsarchives-XL-5-547-2014
  • Subject: TA1-2040 | T | TA1501-1820 | Applied optics. Photonics | Engineering (General). Civil engineering (General) | Technology

3D models generated from low-cost sensors can be useful for quickly updating 3D urban models, yet the quality of such models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method that uses multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of iPhone-generated point clouds. For the chosen showcase, we classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ<sub>1</sub> = 0.44 m, σ<sub>1</sub> = 0.071 m) for the iPhone point cloud and (μ<sub>2</sub> = 0.025 m, σ<sub>2</sub> = 0.037 m) for the TLS point cloud. Our experimental results indicate that the proposed automatic 3D model generation framework could be used for updating 3D urban maps, for fusion and detail enhancement, and for quick or real-time change detection. However, further insight is first needed into the conditions required to guarantee successful point cloud generation from smartphone images.
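The evaluation steps the abstract describes (point-to-point distances between the registered clouds, outlier flagging, and per-point roughness from a locally fitted plane) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the neighbourhood radius, outlier threshold and synthetic data are assumptions introduced here for demonstration only.

```python
# Sketch of the accuracy-assessment steps, assuming: (1) the two clouds are
# already registered, (2) a hypothetical 5 cm outlier threshold, and (3) a
# hypothetical 0.1 m neighbourhood radius for the roughness estimate.
import numpy as np
from scipy.spatial import cKDTree

def point_to_point_distances(cloud_a, cloud_b):
    """Distance from each point in cloud_a to its nearest neighbour in cloud_b."""
    dists, _ = cKDTree(cloud_b).query(cloud_a)
    return dists

def local_roughness(cloud, radius=0.1):
    """Per-point roughness: std. dev. of residuals to a local best-fit plane."""
    tree = cKDTree(cloud)
    rough = np.zeros(len(cloud))
    for i, p in enumerate(cloud):
        nbrs = cloud[tree.query_ball_point(p, radius)]
        if len(nbrs) < 4:              # too few neighbours for a stable plane fit
            continue
        centred = nbrs - nbrs.mean(axis=0)
        # Smallest right-singular vector of the centred neighbourhood
        # is the local plane normal (PCA plane fit).
        normal = np.linalg.svd(centred, full_matrices=False)[2][-1]
        rough[i] = np.std(centred @ normal)
    return rough

# Synthetic demo: a planar "TLS" patch and a noisier "iPhone" copy of it.
rng = np.random.default_rng(0)
tls = np.c_[rng.uniform(0.0, 1.0, (500, 2)), np.zeros(500)]
phone = tls + rng.normal(0.0, 0.01, tls.shape)

d = point_to_point_distances(phone, tls)
outlier_ratio = np.mean(d > 0.05)      # illustrative 5 cm threshold
print(f"mean distance: {d.mean():.4f} m, outliers: {outlier_ratio:.2%}")
print(f"mean roughness: {local_roughness(phone).mean():.4f} m")
```

Comparing the roughness histograms of both clouds, as done in the article, then separates genuine geometric detail from sensor noise: a cloud whose roughness greatly exceeds that of the reference TLS scan of the same surface is dominated by reconstruction noise rather than real structure.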