Abstract

Since Global Navigation Satellite System (GNSS) signals may be unavailable in complex dynamic environments, visual SLAM systems have gained importance in robotics and its applications in recent years. SLAM systems based on point-feature tracking show strong robustness in many scenarios. Nevertheless, point features may be scarce or poorly distributed in low-textured scenes, which degrades the performance of these approaches. Compared with point features, line features are higher-dimensional and can provide more environmental information in complex scenes. In fact, line segments are usually plentiful in human-made environments, even in low-textured scenes, which suggests that scene characteristics remarkably affect the performance of point-line feature based visual SLAM systems. Therefore, this paper develops a scene-assisted point-line feature based visual SLAM method for autonomous flight in unknown indoor environments. First, ORB point features and Line Segment Detector (LSD)-based line features are extracted and matched, respectively, to build two types of projection models. Second, to combine point and line features effectively, a Convolutional Neural Network (CNN)-based model is pre-trained on scene characteristics to weight their associated projection errors. Finally, camera motion is estimated through non-linear minimization of the weighted projection errors between the corresponding observed features and those projected from previous frames. To evaluate the proposed method, experiments were conducted on the public EuRoC dataset. The results indicate that the proposed method outperforms a conventional point-line feature based visual SLAM method in localization accuracy, especially in low-textured scenes.
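As a concrete illustration of the feature-extraction step, the following is a minimal sketch, not the authors' code, of detecting ORB point features and LSD line segments and matching the points with OpenCV. File names and parameter values are placeholders, and cv2.createLineSegmentDetector requires an OpenCV build that ships the LSD implementation (it was absent from some releases for licensing reasons).

```python
# Minimal sketch (assumed pipeline, not the paper's implementation):
# ORB keypoints + LSD line segments per frame, brute-force point matching.
import cv2

def extract_features(gray):
    """Detect ORB keypoints/descriptors and LSD line segments in one image."""
    orb = cv2.ORB_create(nfeatures=1000)
    kps, des = orb.detectAndCompute(gray, None)
    lsd = cv2.createLineSegmentDetector(cv2.LSD_REFINE_STD)
    lines = lsd.detect(gray)[0]  # N x 1 x 4 array of (x1, y1, x2, y2)
    return kps, des, lines

def match_points(des_prev, des_curr):
    """Hamming-distance brute-force matching for binary ORB descriptors."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return sorted(matcher.match(des_prev, des_curr), key=lambda m: m.distance)

prev = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
curr = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)
kp0, des0, lines0 = extract_features(prev)
kp1, des1, lines1 = extract_features(curr)
matches = match_points(des0, des1)
print(f"{len(matches)} point matches, {len(lines0)}/{len(lines1)} line segments")
# Line matching (e.g. with LBD descriptors from opencv-contrib) would follow
# the same detect-describe-match pattern and is omitted here.
```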

Highlights

  • Unmanned Aerial Vehicles (UAVs) are becoming increasingly popular and crucial autonomous platforms for applications ranging from hazard monitoring, search and rescue, and emergency response to Special Weapons and Tactics (SWAT) support and intelligence, surveillance, and reconnaissance (ISR)

  • Line segments are usually sufficient in any human-made environment, even in low-textured scenes, whereas the quality and quantity of detected points decrease in low-texture environments; this suggests that scene characteristics remarkably affect the performance of point-line feature based visual Simultaneous Localization And Mapping (SLAM) systems


Summary

INTRODUCTION

Unmanned Aerial Vehicles (UAVs) are becoming increasingly popular and crucial autonomous platforms for applications ranging from hazard monitoring, search and rescue, and emergency response to Special Weapons and Tactics (SWAT) support and intelligence, surveillance, and reconnaissance (ISR). In the visual SLAM literature, Klein and Murray (2007) introduced PTAM, the first monocular visual SLAM system built on keyframe Bundle Adjustment (BA) and simultaneous tracking and mapping, making real-time V-SLAM a reality. Pumarola et al. (2017) presented a real-time monocular visual SLAM system that combines point and line features for localization and mapping. Wang et al. (2018) introduced the line-feature angle as one of the parameters of the re-projection error and designed a PL-SLAM method that adjusts the point-line weight ratio based on the estimated camera-state residual. Building on these ideas, we develop a scene-assisted point-line feature based visual SLAM method for autonomous flight in unknown indoor environments.

METHODOLOGY
Tracking model of point and line features
Establishment of a CNN-based point and line feature weighting model
Adaptive weighted re-projection error model
$y = f\,Y/Z$ (2), the pinhole projection of the Y coordinate (a sketch of the full weighted model follows this outline)
EXPERIMENTAL VALIDATION
CONCLUSION
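To make the adaptive weighting concrete, below is a minimal sketch of a weighted point-line re-projection cost under assumed names throughout: it is not the paper's implementation, w is a fixed constant standing in for the CNN's scene-dependent output, project() is a standard pinhole model, and the line residual is the distance from each projected 3D endpoint to its observed 2D line. The pose is recovered with SciPy's least_squares.

```python
# Hedged sketch of a weighted point-line re-projection cost (assumed names,
# not the paper's implementation). w plays the role of the CNN-predicted
# scene weight; here it is just a constant.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(pts3d, rvec, t, fx, fy, cx, cy):
    """Pinhole projection u = fx*X/Z + cx, v = fy*Y/Z + cy after rigid motion."""
    P = Rotation.from_rotvec(rvec).apply(pts3d) + t
    return np.stack([fx * P[:, 0] / P[:, 2] + cx,
                     fy * P[:, 1] / P[:, 2] + cy], axis=1)

def residuals(pose, pts3d, obs2d, line3d, line2d, K, w):
    """Stack point errors (weight w) and line errors (weight 1 - w)."""
    rvec, t = pose[:3], pose[3:]
    r_pt = (obs2d - project(pts3d, rvec, t, *K)).ravel()
    # Project both 3D endpoints of each line, then measure their distance to
    # the observed image line a*u + b*v + c = 0 (with a^2 + b^2 = 1).
    ends = project(line3d.reshape(-1, 3), rvec, t, *K).reshape(-1, 2, 2)
    a, b, c = line2d.T
    r_ln = np.concatenate([a * ends[:, 0, 0] + b * ends[:, 0, 1] + c,
                           a * ends[:, 1, 0] + b * ends[:, 1, 1] + c])
    return np.concatenate([w * r_pt, (1.0 - w) * r_ln])

# Synthetic check: observe landmarks at the identity pose, then verify the
# optimizer recovers a near-zero pose from a perturbed starting point.
rng = np.random.default_rng(0)
K = (458.0, 457.0, 367.0, 248.0)  # EuRoC-like intrinsics (assumed values)
pts3d = rng.uniform([-1, -1, 2], [1, 1, 4], (20, 3))
obs2d = project(pts3d, np.zeros(3), np.zeros(3), *K)
line3d = rng.uniform([-1, -1, 2], [1, 1, 4], (5, 2, 3))
ends = project(line3d.reshape(-1, 3), np.zeros(3), np.zeros(3), *K).reshape(-1, 2, 2)
d = ends[:, 1] - ends[:, 0]
n = np.stack([d[:, 1], -d[:, 0]], axis=1)
n /= np.linalg.norm(n, axis=1, keepdims=True)
line2d = np.column_stack([n, -np.sum(n * ends[:, 0], axis=1)])
sol = least_squares(residuals, 0.05 * rng.standard_normal(6),
                    args=(pts3d, obs2d, line3d, line2d, K, 0.6))
print("recovered pose (should be ~0):", sol.x)
```

For w close to 1 the cost reduces to a point-only model, while w close to 0 relies mainly on line segments, which matches the intuition that low-textured scenes should shift weight toward lines.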