Abstract
Obtaining the position and orientation of a camera or sensor is a key task in many fields, such as robot navigation, autonomous driving, and DSM (digital surface model) reconstruction. The pose can be recovered by matching a 2D image to a corresponding digital surface model or point cloud model of the scene. A 3D point cloud model of very high spatial accuracy can be created by combining stereophotogrammetry with big-data processing; so far, the most accurate 3D point cloud models created from satellite imagery reach an accuracy of 3 m @ SE90 (a 3-meter error at Spherical Error 90%, i.e., 90% of points lie within 3 meters of their true positions). In this paper, we propose a novel method for estimating the pose of spaceborne cameras based on the fusion of high-resolution point cloud models and remote sensing images. The core of our method is to project a high-precision 3D point cloud model into the image space of a virtual camera, transforming the 3D-2D pose estimation problem into a 2D-2D registration problem. The registration result between the two images is then used to estimate the camera pose parameters. Simulation experiments were carried out to evaluate the performance of our method, and the results show that acceptable camera pose accuracy can be achieved with the proposed approach.
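The projection step at the core of the method can be illustrated with a minimal sketch. The snippet below assumes a simple pinhole camera model with intrinsics K and pose (R, t); the function name `project_points` and the synthetic point cloud are illustrative, not part of the paper's implementation.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D world points into the image plane of a virtual
    pinhole camera with intrinsics K and extrinsic pose (R, t)."""
    # Transform world coordinates into the camera frame: X_cam = R*X + t.
    cam = R @ points_3d.T + t.reshape(3, 1)  # shape (3, N)
    # Keep only points in front of the camera (positive depth).
    cam = cam[:, cam[2] > 0]
    # Pinhole projection: homogeneous pixel = K @ (X/Z, Y/Z, 1).
    uv = K @ (cam / cam[2])
    return uv[:2].T  # (N, 2) pixel coordinates

# Illustrative example: a tiny synthetic point cloud and a virtual camera
# with focal length 1000 px and principal point at (512, 512).
points = np.array([[0.0, 0.0, 10.0],
                   [1.0, 0.5, 12.0],
                   [-1.0, -0.5, 8.0]])
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)   # virtual camera looking along +Z
t = np.zeros(3)
pixels = project_points(points, K, R, t)
```

Rendering the point cloud this way yields a synthetic image in the virtual camera's frame, so the remaining work reduces to 2D-2D registration between the synthetic and real images.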