Abstract

The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned aerial vehicles (UAVs). Massively parallel processing such as graphics processing unit (GPU) computing makes real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to producing geospatial data such as 3-D city site models efficiently. For the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although the software was developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human input, such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both a digital surface model (DSM) and a digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process then continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; and construct complex roofs.
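The DSM-to-DEM derivation and the first grouping step described above can be sketched for a gridded DSM. This is a minimal illustration, not the production implementation: the morphological-opening approach to bare-earth extraction, the function names, and all window sizes and thresholds below are assumptions for the sake of the example.

```python
import numpy as np
from scipy import ndimage

def extract_object_regions(dsm, min_height=2.5,
                           filter_size=15, min_area_cells=10):
    """Sketch of the early pipeline steps on a gridded DSM:
    1) derive an approximate bare-earth DEM from the DSM,
    2) flag above-ground cells, 3) group them into candidate
    3-D object regions (buildings, houses, trees)."""
    # Step 1: approximate the DEM with a grey-scale morphological
    # opening; the window must exceed the largest building footprint
    # (in cells) so that buildings are removed from the surface.
    dem = ndimage.grey_opening(dsm, size=(filter_size, filter_size))
    # Step 2: normalized height above bare earth; cells well above
    # the ground belong to 3-D objects.
    ndsm = dsm - dem
    object_mask = ndsm > min_height
    # Step 3: group object cells into connected regions and drop
    # regions smaller than the user-supplied building-size limit.
    labels, n = ndimage.label(object_mask)
    sizes = ndimage.sum(object_mask, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area_cells]
    regions = np.where(np.isin(labels, keep), labels, 0)
    return dem, ndsm, regions
```

The subsequent steps (separating trees from buildings, tracing and regularizing boundaries, constructing roofs) would then operate on each labeled region.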
Several case studies have been conducted using a variety of point densities, terrain types and building densities, and the results have been encouraging. More work is required for better processing of, for example, forested areas, buildings with sides that are not at right angles or are not straight, and single trees that impinge on buildings. Further work may also be required to ensure that the buildings extracted are of fully cartographic quality. A first version will be included in production software later in 2011. In addition to standard geospatial applications and UAV navigation, the results have a further advantage: since LiDAR data tends to be accurately georeferenced, the building models extracted can be used to refine image metadata whenever the same buildings appear in imagery for which the GPS/IMU values are poorer than those for the LiDAR.

Highlights

  • In the past few decades, attempts to develop a system that can automatically recognize and extract 3-D objects from imagery have not been successful

  • Modern stereo image matching algorithms and LIDAR provide very dense, accurate point clouds, which can be used for automatic extraction of 3-D objects (Zhang and Smith, 2010)

  • The terrain shaded relief (TSR) makes the 3-D objects in a point cloud manifest. In this case the point cloud was photogrammetrically derived from stereo imagery by means of NGATE software, which extracts elevation automatically by matching multiple overlapping images (Zhang and Walter, 2009), but the algorithms in this paper are applicable to point clouds whether they come from LIDAR or from image matching


Summary

INTRODUCTION

In the past few decades, attempts to develop a system that can automatically recognize and extract 3-D objects (buildings, houses, single trees, etc.) from imagery have not been successful. The terrain shaded relief (TSR) makes the 3-D objects in a point cloud manifest (Figure 1). In this case the point cloud was photogrammetrically derived from stereo imagery by means of NGATE software, which extracts elevation automatically by matching multiple overlapping images (Zhang and Walter, 2009); the algorithms in this paper, however, are applicable to point clouds whether they come from LIDAR or from image matching. The same algorithms could be used to extract and identify other types of 3-D objects such as vehicles, airplanes and people.

Automatic transformation from a point cloud to a bare-earth model
Identifying and grouping 3-D object points into regions
Separating buildings and houses from trees
Differentiating single trees from buildings and houses
Regularizing and simplifying boundary polygons
High-resolution LIDAR
Pennsylvania State LIDAR project
Campus of the University of Southern California
ONGOING WORK
Findings
SUMMARY
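One plausible reading of the "Regularizing and simplifying boundary polygons" step listed above is a Douglas-Peucker simplification pass over each traced region boundary. The sketch below is an assumption for illustration, not the paper's algorithm: it takes a boundary as a list of (x, y) vertices and removes vertices that deviate from the surrounding trend by less than a tolerance. The squaring option the abstract mentions would then snap the remaining edges to the building's dominant orientation (not shown here).

```python
import math

def simplify(points, tol):
    """Douglas-Peucker simplification of a boundary polyline:
    keep the endpoints, recursively keep the vertex farthest from
    the chord if it deviates by more than tol, drop the rest."""
    if len(points) < 3:
        return points[:]
    (x1, y1), (x2, y2) = points[0], points[-1]
    length = math.hypot(x2 - x1, y2 - y1) or 1.0
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        # Perpendicular distance from vertex i to the chord,
        # via the cross-product form.
        d = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)) / length
        if d > dmax:
            dmax, idx = d, i
    if dmax > tol:
        left = simplify(points[:idx + 1], tol)
        right = simplify(points[idx:], tol)
        return left[:-1] + right  # avoid duplicating the split vertex
    return [points[0], points[-1]]
```

For example, a noisy L-shaped boundary collapses to its two true edges: `simplify([(0.0, 0.0), (2.0, 0.05), (4.0, 0.0), (4.0, 2.0), (4.0, 4.0)], 0.1)` yields the three corner vertices.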
