Abstract
Visual localization, i.e., estimating the camera pose within a known three-dimensional (3D) model, is a basic component of numerous applications such as autonomous driving and augmented reality. The most widely used methods in the literature are based on local feature matching between a query image to be localized and database images with known camera poses and local features. However, these methods still struggle with varying illumination conditions and seasonal changes. Additionally, the scene is normally represented by a sparse structure-from-motion point cloud whose points carry corresponding local features to match. This scene representation depends heavily on the chosen local feature type, and switching to a different feature type requires an expensive feature-matching step to regenerate the 3D model. Moreover, state-of-the-art matching strategies are too resource-intensive for some real-time applications. Therefore, in this paper, we introduce a novel framework called deep-learning accelerated visual localization (DLALoc) based on mesh representation. In detail, we employ a dense 3D model, i.e., a mesh, to represent the scene, which can provide more robust 2D-3D matches than 3D point clouds and database images: for 2D features matched between the query and database images, the corresponding 3D points are obtained from the depth map rendered from the mesh. Under this scene representation, we use a pretrained multilayer perceptron combined with homotopy continuation to calculate the relative pose of the query and database images. We also exploit the scale consistency of 2D-3D matches to perform efficient random sample consensus and find the best 2D inlier set for the subsequent perspective-n-point localization step. Furthermore, we evaluate the proposed visual localization pipeline experimentally on the Aachen Day-Night v1.1 and RobotCar Seasons datasets. The results show that the proposed approach achieves state-of-the-art accuracy while reducing localization time by roughly a factor of five.
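The core geometric step above, obtaining a 3D point for a matched 2D feature from a depth map rendered from the mesh, is standard pinhole back-projection. The sketch below is a minimal illustration of that idea, not the paper's implementation: the function name, the toy intrinsics `K`, and the synthetic depth map are all assumptions for demonstration.

```python
import numpy as np

def backproject(uv, depth_map, K):
    """Lift a 2D pixel to a 3D point in the camera frame using a
    rendered depth map: X = d * K^{-1} [u, v, 1]^T (pinhole model)."""
    u, v = uv
    d = depth_map[v, u]                       # depth rendered from the mesh at this pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d * ray

# Toy example: hypothetical 4x4 depth map and pinhole intrinsics
# with the principal point at pixel (2, 2).
K = np.array([[100.0,   0.0, 2.0],
              [  0.0, 100.0, 2.0],
              [  0.0,   0.0, 1.0]])
depth = np.full((4, 4), 5.0)                  # constant 5 m depth everywhere
X = backproject((2, 2), depth, K)             # pixel on the optical axis
# → [0., 0., 5.]  (a point 5 m straight ahead of the camera)
```

Such 2D-3D correspondences then feed a perspective-n-point solver inside RANSAC to recover the absolute pose of the query camera.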