Abstract

An orthoimage, which is geometrically equivalent to a map, is one of the most important geospatial products. Displacement and occlusion in optical images are caused by perspective projection, camera tilt, and object relief. A digital surface model (DSM) is essential for generating true orthoimages, both to correct displacement and to recover occluded areas. Light detection and ranging (LiDAR) data collected by an airborne laser scanner (ALS) system is a major source of DSMs. Traditional methods require sophisticated procedures to produce a true orthoimage: most utilize the 3D coordinates of the DSM together with multiview images of overlapping areas to orthorectify displacement and to detect and recover occluded areas. LiDAR point cloud data provide not only 3D coordinates but also intensity information reflected from object surfaces in a georeferenced, orthoprojected space. This paper proposes true orthoimage generation based on generative adversarial network (GAN) deep learning (DL) with the Pix2Pix model, using the intensity and DSM of the LiDAR data. The major advantage of using LiDAR data is that, in terms of projection geometry, the data constitute an occlusion-free true orthoimage, except in cases of low image quality. Intensive experiments were performed using benchmark datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). The results demonstrate that the proposed approach can efficiently generate true orthoimages directly from LiDAR data. However, finding appropriate preprocessing to improve the quality of the LiDAR intensity data is crucial for producing higher-quality true orthoimages.
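The Pix2Pix model mentioned above is a conditional GAN whose generator is trained with a combined objective: an adversarial term plus an L1 reconstruction term weighted by a factor λ (100 in the original Pix2Pix formulation). The following is a minimal numeric sketch of that generator loss only; the function name and array shapes are illustrative, and the actual model additionally trains a U-Net generator and a PatchGAN discriminator, which are not shown here.

```python
import numpy as np

def pix2pix_generator_loss(d_fake, g_out, target, lam=100.0):
    """Pix2Pix generator objective: adversarial BCE term + lambda-weighted L1.

    d_fake: discriminator outputs on generated patches, values in (0, 1)
    g_out, target: generated and ground-truth images as float arrays
    lam: L1 weight (100 in the original Pix2Pix formulation)
    """
    eps = 1e-12
    adv = -np.mean(np.log(d_fake + eps))   # generator wants D(fake) -> 1
    l1 = np.mean(np.abs(g_out - target))   # pixel-wise reconstruction term
    return adv + lam * l1
```

A generator that reproduces the target exactly while fooling the discriminator drives this loss toward zero; the large λ strongly penalizes pixel-level deviation from the reference true orthoimage.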

Highlights

  • True orthoimages are vertical views of the Earth’s surface, eliminating object distortion and allowing nearly any point on the ground to be viewed at a uniform scale

  • The performance was evaluated with plots of the loss per epoch and with the Fréchet inception distance (FID) and structural similarity index measure (SSIM), which are frequently used as evaluation measures for generative adversarial network (GAN)-based models [33,34,35,36]

  • Since IR is beyond the visible spectrum, color infrared (CIR) images are displayed as the false color composite
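The SSIM mentioned in the highlights compares two images through their luminance, contrast, and covariance statistics. Below is a minimal sketch of a single-window (global) SSIM for 8-bit images; the paper's evaluation presumably uses the standard sliding-window variant, and the stabilizing constants here are the conventional defaults (K1 = 0.01, K2 = 0.03, dynamic range 255), which are assumptions rather than values taken from the paper.

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM between two same-shaped 8-bit image arrays."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den
```

Identical images yield an SSIM of 1.0, and the score decreases as the structural agreement between the generated and reference true orthoimage degrades.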


Introduction

True orthoimages are vertical views of the Earth’s surface, eliminating object distortion and allowing nearly any point on the ground to be viewed at a uniform scale. True orthoimages are geometrically equivalent to topographic maps, showing the true geographic locations of terrain features. Geometric distortions are caused by the relief displacement of terrain features (i.e., height variation of the terrain and object surfaces) and by the perspective projection of optical cameras, which results in occlusion areas. Recovery or compensation of the occlusion areas is crucial in true orthoimage generation [1]. Most approaches have focused on detection and recovery of the occlusion areas. Traditional photogrammetric methods require aerial triangulation to obtain the exterior orientation parameters (i.e., exposure location and rotation angles) of each aerial image, as well as precise 3D object model data such as a digital building model (DBM) with a digital terrain model (DTM), to remove geometric distortion and to detect occlusion areas.
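The relief displacement described above follows the standard photogrammetric relation for a vertical photograph, d = r·h/H, where r is the radial distance of the imaged point from the nadir, h the object height above the datum, and H the flying height. A minimal sketch (the function name and example values are illustrative):

```python
def relief_displacement(r, h, H):
    """Radial relief displacement d = r * h / H in a vertical photograph.

    r: radial distance of the imaged point from the nadir point
    h: object height above the datum
    H: flying height above the datum (same units as h)
    """
    return r * h / H

# e.g., a 50 m building imaged 100 mm from the nadir at 1000 m flying height
# is displaced 5 mm radially outward on the photograph
d = relief_displacement(100.0, 50.0, 1000.0)  # -> 5.0
```

The formula makes explicit why a DSM is required for true orthorectification: the displacement grows with both object height and radial distance, so each pixel must be shifted back by an amount that depends on the surface height at that location.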

