Abstract

Autonomous parking in an indoor parking lot without human intervention is one of the most in-demand and challenging tasks for autonomous driving systems. The key to this task is precise real-time indoor localization. However, state-of-the-art visual simultaneous localization and mapping (VSLAM) systems based on low-level visual features suffer in monotonous or texture-less scenes and under poor illumination or dynamic conditions. Additionally, low-level feature-based maps are hard for human beings to use directly. In this paper, we propose a semantic landmark-based robust VSLAM for real-time localization of autonomous vehicles in indoor parking lots. Parking slots are extracted as meaningful landmarks and enriched with confidence levels. We then propose a robust optimization framework that solves the aliasing problem of semantic landmarks by dynamically eliminating suboptimal constraints in the pose graph and correcting erroneous parking slot associations. As a result, a semantic map of the parking lot, usable by both autonomous driving systems and human beings, is established automatically and robustly. We evaluated the real-time localization performance on multiple autonomous vehicles: a track-tracing repeatability of 0.3 m was achieved during autonomous driving at 10 km/h.
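The "dynamically eliminating suboptimal constraints" step can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual optimizer: it models each slot association as a 2-D residual with a scalar information weight derived from the slot's detection confidence, and drops constraints that fail a chi-square gate before re-optimizing the pose graph.

```python
def prune_constraints(constraints, chi2_threshold=5.99):
    """Keep only pose-graph constraints that pass a chi-square gate.

    Each constraint is a dict with (hypothetical fields for this sketch):
      'residual': (ex, ey) -- 2-D error between the predicted and observed
                              parking-slot position in the map frame
      'info':     scalar information weight (inverse variance), scaled by
                  the detection confidence of the slot
    Constraints whose confidence-weighted squared error exceeds the gate
    (2 DoF, 95% quantile ~ 5.99) are treated as erroneous associations
    (aliased slots) and removed from the graph.
    """
    active = []
    for c in constraints:
        ex, ey = c['residual']
        chi2 = c['info'] * (ex * ex + ey * ey)
        if chi2 <= chi2_threshold:
            active.append(c)
    return active


# A well-matched slot (small residual) survives; a slot aliased to the
# wrong landmark (large residual) is pruned.
constraints = [
    {'residual': (0.1, 0.1), 'info': 10.0},   # correct association
    {'residual': (2.0, 0.0), 'info': 10.0},   # aliased association
]
print(len(prune_constraints(constraints)))    # -> 1
```

In a full robust back-end this gating would alternate with re-optimization (as in switchable-constraint or dynamic covariance scaling formulations), since pruning one outlier changes the residuals of the remaining constraints.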

Highlights

  • We present a robust visual simultaneous localization and mapping (VSLAM) system based on the recognition of high-level parking landmarks, i.e., parking slots

  • We design and implement a low-cost and robust visual SLAM system that uses parking slots as typical visual landmarks, aided by a limited number of visual fiducial tags, and is immune to monotonous texture, varying illumination and dynamic conditions

  • We propose a robust SLAM back-end approach that associates parking slots while accounting for the confidence level of the landmarks

  • We analyse the effectiveness and arrangement strategy of visual fiducial tags in a typical indoor parking lot

Introduction

Autonomous driving has seen considerable progress in recent years. Researchers have made breakthroughs in several challenging fields, including obstacle detection, real-time motion planning and high-precision localization (often based on differential global navigation satellite systems (GNSS)). Deep learning-based methods have shown their capability for accurate and robust detection of meaningful objects [12]. Inspired by these methods, we present a robust VSLAM system based on the recognition of high-level parking landmarks, i.e., parking slots. To support localization in slot-lacking areas such as passageways, we introduce visual fiducial tags detected from the front-view camera to improve the overall accuracy and robustness. Their numbers and configurations are further analyzed.
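The geometric idea behind tag-aided localization can be sketched in 2-D. Assuming the map stores each tag's pose and a detection yields the tag's pose in the vehicle frame (camera extrinsics already applied), the vehicle pose follows from frame composition: T_map_vehicle = T_map_tag · (T_vehicle_tag)⁻¹. All names and frames below are illustrative, not the paper's notation.

```python
import math

def compose(a, b):
    """SE(2) composition: pose b expressed through frame a. Poses are (x, y, theta)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + math.cos(at) * bx - math.sin(at) * by,
            ay + math.sin(at) * bx + math.cos(at) * by,
            at + bt)

def inverse(a):
    """SE(2) inverse: (R, p)^-1 = (R^T, -R^T p)."""
    ax, ay, at = a
    c, s = math.cos(at), math.sin(at)
    return (-(c * ax + s * ay), -(-s * ax + c * ay), -at)

def localize_from_tag(tag_in_map, tag_in_vehicle):
    """Vehicle pose in the map from a single fiducial-tag observation:
    T_map_vehicle = T_map_tag * (T_vehicle_tag)^-1
    """
    return compose(tag_in_map, inverse(tag_in_vehicle))


# Recover a known vehicle pose from a simulated tag detection.
true_vehicle = (5.0, 2.0, math.pi / 2)       # ground truth in map frame
tag_in_map = (6.0, 4.0, 0.0)                 # tag pose stored in the map
tag_in_vehicle = compose(inverse(true_vehicle), tag_in_map)  # simulated detection
estimate = localize_from_tag(tag_in_map, tag_in_vehicle)     # -> (5.0, 2.0, pi/2)
```

In practice each tag observation would enter the pose graph as one more constraint rather than fixing the pose outright, which is why tag placement in slot-lacking passageways matters for overall graph rigidity.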

Visual SLAM
Semantic Landmark-Based SLAM
Robust SLAM
Approach
Learning-Based Parking Slot Detection
CNN-Based Slot ID Recognition
Visual Fiducial Tags
Semantic-Based Robust SLAM
Front-End
Back-End
Experimental Analysis
Mapping with Semantic Landmarks
Parking Slot-Only Mapping
Tag-Aided Parking Slot Mapping
Online Localization Performance
How Many Visual Fiducial Tags Are Needed?
Observation Frequency-Based Analysis of Tags
Position-Based Analysis of Tags
Explanation Based on Graph Configuration
Conclusions