Abstract

We propose a novel framework for accurate 3D georegistration of wide area motion imagery (WAMI), a challenging problem because parametric transformations are insufficient for aligning WAMI image frames to a georeferenced coordinate system in urban areas containing tall buildings and other 3D structures. Using structure from motion (SfM), we estimate a 3D point cloud for the scene. Independently, we compute a precise alignment between the roads in the WAMI frames and a georeferenced vector roadmap by detecting the locations of moving vehicles and aligning these locations with the roads in the vector roadmap via parametric chamfer matching. The aligned vector roadmap then identifies corresponding pixels in the WAMI frames, which are triangulated using the SfM camera parameters to obtain a set of sparse but georeferenced points in the SfM 3D coordinate frame that directly enable georegistration of the complete 3D scene point cloud via a similarity transform. The proposed methodology enables 3D georegistration of a sequence of WAMI frames using only georeferenced vector roadmaps, which are readily available, without requiring the independent georeferenced lidar scans used in prior work. Our framework is validated on a WAMI dataset that includes high-resolution WAMI frames of the downtown Rochester, NY region. Experimental results demonstrate that the proposed framework produces an accurate georeferenced point cloud representation of the scene.
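The sketch below illustrates the final alignment step described above: estimating a similarity transform (scale, rotation, translation) from the sparse triangulated road points in the SfM coordinate frame to their georeferenced counterparts, and then applying it to the full scene point cloud. This is a minimal illustration, not the authors' released implementation; it assumes a standard Umeyama-style closed-form solution with NumPy, and all array names and placeholder data are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def estimate_similarity_transform(src, dst):
    """Umeyama-style estimate of scale s, rotation R, translation t
    minimizing sum ||dst_i - (s * R @ src_i + t)||^2 over matched points.

    src, dst: (N, 3) arrays of corresponding points
              (SfM frame -> georeferenced frame).
    """
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    # Cross-covariance between the centered point sets; its SVD gives
    # the optimal rotation (with a sign correction to avoid reflections).
    H = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt

    # Scale from the corrected singular values and the source variance;
    # translation aligns the centroids under the estimated s and R.
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Illustrative placeholders: in the described pipeline, sfm_road_pts would be
# the triangulated road points and geo_road_pts their georeferenced locations
# obtained from the chamfer-matched vector roadmap.
sfm_road_pts = np.random.rand(100, 3)
geo_road_pts = sfm_road_pts * 2.0 + 5.0       # synthetic scale-2, shift-5 example

s, R, t = estimate_similarity_transform(sfm_road_pts, geo_road_pts)

# Georegister the complete (dense) SfM point cloud with the estimated transform.
full_cloud = np.random.rand(10000, 3)         # placeholder for the dense SfM cloud
georegistered_cloud = (s * (R @ full_cloud.T)).T + t
```

In practice the point correspondences would come from the roadmap alignment and SfM triangulation stages; if outliers are expected, a robust variant (e.g., RANSAC over the correspondences) could be substituted for the closed-form fit.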
