Abstract

Delays in delivering the spatial information needed to support bridge maintenance decisions introduce stoppages into the bridge management workflow. Previous studies show that spatial data collection and analysis are major components of bridge inspection programs. However, time-consuming spatial data processing causes days or weeks of delay in delivering the needed spatial information. In many cases, federal agencies either wait for tedious data processing to finish or act on partial processing results that carry high uncertainty. Using the processing of 3D laser scanning point clouds as an example, this paper examines the technical feasibility and scientific challenges of quantifying data processing time and the quality (e.g., accuracy) of the derived information (e.g., the minimum under-clearance of a bridge) useful for bridge management. Different decisions need different geometric attributes and relationships with various accuracy requirements, so the optimal data processing strategy varies with the required information quality and time limits. Manually analyzing the time-quality trade-offs of spatial data processing is difficult due to its ad hoc nature and the large number of possible parameter settings in data processing workflows. The authors propose a computational framework that automatically records engineers' data processing histories along with the corresponding engineering needs; analyzing those histories leads to insights into the trade-offs between the required computational complexity/time and the quality of the delivered spatial information. This paper presents time-quality analysis results from two bridge inspection cases.
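To make the time-quality trade-off concrete, the sketch below illustrates one instance of it: estimating the minimum under-clearance of a bridge from a point cloud while varying the voxel-downsampling resolution, which trades processing time against the accuracy of the derived value. This is a minimal illustration, not the authors' framework; the synthetic scene, the 2 m girder/roadway split threshold, and all parameter values are hypothetical assumptions.

```python
# Minimal sketch (hypothetical data and thresholds, not the paper's method):
# estimate a bridge's minimum under-clearance from a 3D point cloud and
# measure how voxel downsampling trades processing time against accuracy.
import time
import numpy as np

def voxel_downsample(points, voxel):
    """Keep one point per voxel; a coarser voxel is faster but less accurate."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[idx]

def min_under_clearance(points, split_z=2.0, cell=0.5):
    """Minimum vertical gap between girder underside and roadway surface,
    computed per (x, y) grid cell and minimized over all shared cells."""
    girder = points[points[:, 2] > split_z]   # assumed superstructure points
    road = points[points[:, 2] <= split_z]    # assumed roadway points
    lows, highs = {}, {}
    for x, y, z in girder:
        k = (int(x // cell), int(y // cell))
        lows[k] = min(lows.get(k, np.inf), z)
    for x, y, z in road:
        k = (int(x // cell), int(y // cell))
        highs[k] = max(highs.get(k, -np.inf), z)
    gaps = [lows[k] - highs[k] for k in lows.keys() & highs.keys()]
    return min(gaps) if gaps else None

# Synthetic 20 m x 20 m scene: flat roadway near z = 0, girder underside
# near z = 5 m with a slight sag, both with 2 cm scanner noise.
rng = np.random.default_rng(0)
n = 100_000
xy = rng.uniform(0.0, 20.0, size=(n, 2))
road = np.column_stack([xy[: n // 2], rng.normal(0.0, 0.02, n // 2)])
sag = 0.1 * np.sin(xy[n // 2:, 0] / 20.0 * np.pi)
girder = np.column_stack([xy[n // 2:], 5.0 - sag + rng.normal(0.0, 0.02, n // 2)])
cloud = np.vstack([road, girder])

# Time-quality trade-off: a coarser voxel cuts processing time but biases
# the clearance estimate (fewer points per cell -> less extreme minima).
for voxel in (0.05, 0.2, 0.5, 1.0):
    t0 = time.perf_counter()
    estimate = min_under_clearance(voxel_downsample(cloud, voxel))
    elapsed = time.perf_counter() - t0
    print(f"voxel={voxel:4.2f} m  time={elapsed:6.3f} s  clearance={estimate:.3f} m")
```

In a real workflow the parameter space is far larger (registration, segmentation, and surface-fitting settings, not just one downsampling resolution), which is why the paper argues that such trade-offs are impractical to analyze manually and should be mined from recorded processing histories.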
