Abstract

The availability of light detection and ranging (LiDAR) data has ushered in a new era of landscape analysis. For example, improvements in LiDAR data resolution may make it possible to accurately model microtopography over large geographic areas; however, the added acquisition and processing costs of higher-resolution data may not be justified by the resulting gains in accuracy. We examined two LiDAR datasets of differing resolutions: a low point density (0.714 points/m²) 1 m DEM available statewide in Pennsylvania and a high point density (10.28 points/m²) research-grade 1 m DEM. We compared the roughness calculated from both resulting DEMs using the standard deviation of slope, the standard deviation of curvature, a pit fill index, and the difference between a smoothed spline surface and the original DEM. These results were then compared to field-surveyed plots and transects of microterrain. Using both datasets, we identified patterns of roughness associated with landforms derived from hydrogeomorphic features such as stream channels, gullies, and depressions. Lowland areas tended to have the highest roughness values for all methods, while other areas showed distinctive patterns of roughness values across metrics. However, our results suggest that the high-resolution research-grade LiDAR did not improve roughness modeling in comparison to the coarser statewide LiDAR. We conclude that resolution and initial point density may be less important than the algorithm and methodology used to generate a LiDAR-derived DEM for roughness modeling purposes.
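
To make the roughness metrics above concrete, the sketch below computes two of them, the standard deviation of slope and the standard deviation of curvature, in a moving window over a gridded DEM. This is a minimal NumPy/SciPy illustration, not the authors' implementation: the 3 × 3 window, the Laplacian curvature proxy, and the synthetic DEM are assumptions for demonstration only.

```python
import numpy as np
from scipy.ndimage import generic_filter

def slope_degrees(dem, cellsize=1.0):
    # Central-difference gradients; axis 0 is rows (north-south).
    dzdy, dzdx = np.gradient(dem, cellsize)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

def curvature_proxy(dem, cellsize=1.0):
    # Assumption: the Laplacian as a simple stand-in for curvature.
    dzdy, dzdx = np.gradient(dem, cellsize)
    d2zdy2 = np.gradient(dzdy, cellsize, axis=0)
    d2zdx2 = np.gradient(dzdx, cellsize, axis=1)
    return d2zdx2 + d2zdy2

def focal_std(grid, window=3):
    # Standard deviation in a moving window -> a per-cell roughness map.
    return generic_filter(grid, np.std, size=window, mode="nearest")

# Demo on a synthetic 1 m DEM (a gently sloping plane plus noise).
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(200.0), np.arange(200.0))
dem = 300.0 + 0.05 * x + rng.normal(0.0, 0.1, x.shape)

sd_slope = focal_std(slope_degrees(dem))   # SD-of-slope roughness map
sd_curv = focal_std(curvature_proxy(dem))  # SD-of-curvature roughness map
```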

Highlights

  • Over the past several decades, geomorphologists, soil scientists, ecologists, foresters, and hydrologists have increasingly utilized terrain data for landscape classification [1,2,3,4], predicting forest communities [5], predicting soil properties [6,7,8,9], and understanding riparian zones and their stream networks [10]

  • Due to improvements in data acquisition, computing power, and storage capacity, terrain data has become increasingly available at finer resolutions and broader scales, from the National Elevation Dataset (NED) and the Shuttle Radar Topography Mission (SRTM) to light detection and ranging (LiDAR) data

  • The Critical Zone Observatory (CZO) LiDAR root mean square difference (RMSD) was slightly larger than the RMSD for the Pennsylvania base map (PAMAP) LiDAR, with values of 0.417 m and 0.410 m, respectively (Table 1); a sketch of this RMSD computation follows below
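
A minimal sketch of how such an RMSD can be computed is given below, sampling a LiDAR-derived DEM at field-surveyed point locations. The highlight does not specify the reference surface, so field-surveyed elevations are assumed here; the file names, the CSV layout, and the use of rasterio are likewise illustrative assumptions rather than the authors' workflow.

```python
import numpy as np
import rasterio  # assumed tooling; any raster reader would do

# Hypothetical inputs: surveyed (x, y, z) points and one LiDAR-derived DEM.
pts = np.loadtxt("survey_points.csv", delimiter=",", skiprows=1)  # columns: x, y, z

with rasterio.open("lidar_dem.tif") as src:
    # src.sample yields one array per point (one value per band); take band 1.
    dem_z = np.array([val[0] for val in src.sample(pts[:, :2])])

# Root mean square difference between DEM elevations and surveyed elevations.
rmsd = np.sqrt(np.mean((dem_z - pts[:, 2]) ** 2))
print(f"RMSD = {rmsd:.3f} m")
```

Running the same script against each DEM ("pamap_dem.tif" and "czo_dem.tif", hypothetical names) would yield the two values compared in Table 1.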

Introduction

Over the past several decades, geomorphologists, soil scientists, ecologists, foresters, and hydrologists have increasingly utilized terrain data for landscape classification [1,2,3,4], predicting forest communities [5], predicting soil properties [6,7,8,9], and understanding riparian zones and their stream networks [10]. Although LiDAR-derived DEMs have been shown to be extremely accurate compared to DEMs generated without LiDAR [11], their accuracy for measuring landscape microtopography is debated [12]. This debate stems in part from data interpretation difficulties arising from abiotic terrain factors (such as slope complexity) and biotic terrain factors (such as evergreen vegetation and coarse woody debris) [13,14]. Some researchers have found LiDAR-derived DEMs to be oversmoothed [12], which can minimize surface roughness and result in less topographic complexity. Others have found LiDAR-derived DEMs effective at identifying features such as landslides, which can have complex roughness patterns [15].

