Abstract
3D lines in structured environments encode particular regularities such as parallelism and orthogonality. We leverage this structural regularity to estimate the absolute and relative camera poses. We decouple the rotation and translation, and propose a novel rotation estimation method. We decompose the absolute and relative rotations and reformulate the problem as computing the rotation from the Manhattan frame to the camera frame. To compute this rotation, we propose an accurate and efficient two-step method. We first estimate its two degrees of freedom (DOF) from two image lines, and then estimate its third DOF from another image line. For these lines, we assume their associated 3D lines are mutually orthogonal, or that two 3D lines are parallel to each other and orthogonal to the third. Thanks to our two-step DOF estimation, our absolute and relative pose estimation methods are accurate and efficient. Moreover, our relative pose estimation method relies on weaker assumptions and fewer correspondences than existing approaches. We also propose a novel strategy to reject outliers and identify the dominant directions of the scene. We integrate it into our pose estimation methods, and show that it is more robust than RANSAC. Experiments on synthetic and real-world datasets demonstrate that our methods outperform state-of-the-art approaches.
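The abstract's two-step estimation of the Manhattan-to-camera rotation is specific to the paper. As a generic illustration of only the final assembly step, the sketch below assumes the three Manhattan axis directions have already been estimated (noisily) in the camera frame, and orthonormalizes them into the closest proper rotation via SVD (orthogonal Procrustes). The function and its name are illustrative, not the authors' implementation.

```python
import numpy as np

def rotation_from_axis_directions(dirs_cam):
    """Given a 3x3 array whose columns are (possibly noisy, roughly
    mutually orthogonal) Manhattan axis directions expressed in camera
    coordinates, return the proper rotation from the Manhattan frame to
    the camera frame that is closest in the Frobenius sense."""
    U, _, Vt = np.linalg.svd(dirs_cam)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # flip one axis to keep a right-handed frame
        U[:, -1] *= -1
        R = U @ Vt
    return R
```

With exact, orthonormal input columns this simply returns that rotation; with noisy columns it projects them onto the rotation group.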