Abstract

Accurate onboard-camera pose estimation is a major challenge for satellite systems, and efforts to improve remote sensing camera pose accuracy continue. The camera pose can be recovered by aligning a captured 2D image with a 3D digital surface model of the corresponding scene. In this paper, a novel method is proposed that estimates the camera pose from captured images together with known 3D products of the real scene, in order to enhance remote sensing camera attitude accuracy. The goal is to determine the pose of a camera purely from an image given a known 3D model: 3D products of very high spatial resolution are projected into image space by a virtual camera system using error-contaminated initial exterior orientation parameters, and how precisely the camera pose can be determined depends on the 2D–3D registration result. The process consists of two steps: (1) feature extraction and (2) similarity measurement and registration. Furthermore, the proposed method revises the rotation matrix and translation vector using a formulation based on the quaternion representation of rotation. We evaluate our method on challenging simulation data, and the results show that acceptable camera pose accuracy can be achieved.
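The quaternion-based revision of the rotation matrix and translation vector mentioned above could be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the (w, x, y, z) quaternion convention, and the left-multiplicative update form are all assumptions made for the example.

```python
import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion q = (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)  # normalize to guard against drift
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def revise_pose(R_init, t_init, dq, dt):
    """Apply a quaternion-parameterized correction (dq, dt) to an
    error-contaminated initial exterior orientation (R_init, t_init).
    The left-multiplicative update used here is an illustrative choice."""
    dR = quat_to_rot(np.asarray(dq, dtype=float))
    R_new = dR @ R_init
    t_new = dR @ t_init + np.asarray(dt, dtype=float)
    return R_new, t_new
```

With an identity correction (dq = (1, 0, 0, 0), dt = 0) the initial pose is returned unchanged; in practice the correction would come from minimizing the 2D–3D registration residual.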

