The rapid proliferation of portable, ground-based light detection and ranging (LiDAR) instruments suggests the need for additional quantitative tools complementary to the commonly invoked digital terrain model (DTM). One such metric is surface roughness, a measure of local-scale topographic variability that has been shown to be effective for mapping discrete morphometric features, e.g., fractures in outcrop, landslide scarps, and alluvial fan deposits. Several surface roughness models have been proposed, the most common of which is based on the standard deviation of point distances from a reference datum, e.g., DTM panels or best-fit planes. In the present work, we experimentally evaluate the accuracy of these types of surface roughness models by constructing a surface of known roughness, acquiring terrestrial LiDAR scans of the surface at 25 dual-axis rotations, and comparing, for each rotation, the surface roughness estimates calculated by three surface roughness models. Results indicate that a recently proposed surface roughness model based on orthogonal distance regression (ODR) planes and orthogonal point-to-plane distance measurements is generally preferred on the basis of minimum-error surface roughness estimates. In addition, the effects of terrestrial LiDAR sampling errors are discussed with respect to this ODR-based surface roughness model, and several practical suggestions are made for minimizing these effects. These include (1) positioning the laser scanner at the largest reasonable distance from the scanned surface, (2) maintaining half-angles for individual scans at less than 22.5°, and (3) minimizing occlusion (shadowing) errors by using multiple, merged scans with the least possible overlap.
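The following is a minimal sketch of the general idea behind an ODR-plane roughness estimate as described above: fit a best-fit plane to a local point neighborhood by orthogonal distance regression (here implemented with an SVD-based total least squares fit), then take the standard deviation of the orthogonal point-to-plane distances as the roughness value. The function name `odr_plane_roughness` and the synthetic test data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def odr_plane_roughness(points):
    """Roughness of a local point neighborhood: standard deviation of
    orthogonal point-to-plane distances from an ODR (total least squares)
    best-fit plane. `points` is an (N, 3) array of x, y, z coordinates."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)

    # SVD of the centered coordinates: the right singular vector associated
    # with the smallest singular value is the normal of the ODR plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]

    # Signed orthogonal (perpendicular) distance of each point to the plane.
    distances = centered @ normal

    # Roughness as the standard deviation of the orthogonal residuals.
    return distances.std(ddof=1)


if __name__ == "__main__":
    # Synthetic example: a gently tilted plane plus Gaussian "roughness" noise.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 1.0, size=(500, 2))
    z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0.0, 0.01, size=500)
    cloud = np.column_stack([xy, z])
    print(f"Estimated roughness: {odr_plane_roughness(cloud):.4f} (true noise sigma = 0.01)")
```

Because the plane is fit by minimizing orthogonal rather than vertical residuals, the roughness estimate is insensitive to the orientation of the scanned surface, which is the property examined across the 25 dual-axis rotations in the experiment.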