Abstract

The occupancy grid map is a popular tool for representing the surrounding environment of mobile robots and intelligent vehicles. Its applications date back to the 1980s, when researchers used sonar or LiDAR to represent environments as occupancy grids. In the literature, however, research on vision-based occupancy grid mapping remains scant. Moreover, when moving in a real, dynamic world, occupancy grid mapping must not only detect occupied areas but also understand the dynamics of the environment. This paper addresses the issue by presenting a stereo-vision-based framework for creating a dynamic occupancy grid map, applied to an intelligent vehicle driving in an urban scenario. Besides representing the surroundings as occupancy grids, dynamic occupancy grid mapping provides motion information for each grid cell. The proposed framework consists of two components: the first estimates the motion of the vehicle itself and of independent moving objects; the second builds the dynamic occupancy grid map from the estimated motion information and the dense disparity map. The main benefit of the framework is its ability to map occupied areas and moving objects at the same time, which is highly practical in real applications. The proposed method is evaluated on real data acquired by our intelligent vehicle platform “SeTCar” in urban environments.
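
To make the idea of a dynamic occupancy grid concrete, the sketch below shows one possible per-cell representation that stores both an occupancy estimate (as log-odds) and a motion indicator. The class name, the update rule, and the smoothing factor are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class DynamicOccupancyGrid:
    """Minimal dynamic occupancy grid: each cell keeps an occupancy
    log-odds value and a 2-D velocity estimate (vx, vy) in m/s.
    This is a simplified sketch, not the representation used in the paper."""

    def __init__(self, width, height, resolution=0.2):
        # Occupancy stored as log-odds, initialised to 0 (p = 0.5, unknown).
        self.log_odds = np.zeros((height, width), dtype=np.float32)
        # Per-cell motion indicator: estimated velocity of the occupant.
        self.velocity = np.zeros((height, width, 2), dtype=np.float32)
        self.resolution = resolution  # metres per cell

    def update_cell(self, row, col, p_occupied, velocity_xy=None):
        """Fuse a new occupancy measurement (output of an inverse sensor
        model) and, optionally, a velocity estimate for one cell."""
        p = np.clip(p_occupied, 0.01, 0.99)
        self.log_odds[row, col] += np.log(p / (1.0 - p))
        if velocity_xy is not None:
            # Simple exponential smoothing of the motion estimate (assumed).
            self.velocity[row, col] = (0.7 * self.velocity[row, col]
                                       + 0.3 * np.asarray(velocity_xy))

    def occupancy_probability(self):
        """Convert log-odds back to probabilities for the whole grid."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))
```

Under this kind of representation, a cell with high occupancy probability and a velocity magnitude above a chosen threshold would be reported as an occupied, moving cell, which is the behaviour the abstract describes.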

Highlights

  • In the field of intelligent vehicles, many tasks, such as localization, collision avoidance and path planning, are usually performed based on well-represented maps [1,2]

  • This paper proposes a framework of stereo-vision-based dynamic occupancy grid mapping in urban environments

  • We present a complete framework for dynamic occupancy grid mapping


Summary

Introduction

In the field of intelligent vehicles, many tasks, such as localization, collision avoidance, and path planning, are usually performed on the basis of well-represented maps [1,2]. This paper proposes a framework for stereo-vision-based dynamic occupancy grid mapping in urban environments. The framework comprises two main components, motion analysis for the vehicle itself and for independent moving objects, and dynamic occupancy grid mapping, carried out within two parallel processes: sparse feature-point processing between two consecutive stereo image pairs, and dense stereo processing. A novel segmentation method for independent moving objects based on the U-disparity map is introduced. Building on previous work in [6], we propose a dynamic occupancy grid mapping method that accounts for the pitch angle between the stereo-vision system and the ground plane.
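
As a rough illustration of the dense-stereo branch, the snippet below builds U- and V-disparity maps from a dense disparity image by histogramming disparity values per column and per row. The function name, the 64-level disparity range, and the integer-disparity assumption are mine, not taken from the paper; vertical obstacles show up as near-horizontal segments in the U-disparity map (which the paper exploits to segment independent moving objects), while the ground plane projects to an oblique line in the V-disparity map.

```python
import numpy as np

def u_v_disparity_maps(disparity, max_disparity=64):
    """Build U- and V-disparity histograms from a dense disparity image.

    disparity: (H, W) array of integer disparities; invalid pixels < 0.
    Returns:
      u_disp: (max_disparity, W)  column-wise disparity histogram
      v_disp: (H, max_disparity)  row-wise disparity histogram
    """
    h, w = disparity.shape
    u_disp = np.zeros((max_disparity, w), dtype=np.int32)
    v_disp = np.zeros((h, max_disparity), dtype=np.int32)

    valid = (disparity >= 0) & (disparity < max_disparity)
    rows, cols = np.nonzero(valid)
    d = disparity[rows, cols].astype(np.intp)

    # Each valid pixel votes once in its column's histogram (U-disparity)
    # and once in its row's histogram (V-disparity).
    np.add.at(u_disp, (d, cols), 1)
    np.add.at(v_disp, (rows, d), 1)
    return u_disp, v_disp
```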

Motion Analysis from a Moving Vision System
Vision-Based Occupancy Grid Mapping
Platform and Sensor Model
Definition of Dynamic Occupancy Grid Map
Creating Disparity Map and U-V Disparity Maps
Motion Analysis from Moving Stereo Vision Platform
Feature Points Detection
Establishing Point-to-Point Correspondence
Motion Calculation
Independent Moving Objects Segmentation in the U-Disparity Map
Building Dynamic Occupancy Grid Map
Preprocessing
Occupancy and Motion Indicator
Implementation
Evaluating Sparse Feature Point-Based Motion Estimation
Experiments in Feature Detectors
Experiments in Establishing Point-to-Point Correspondences
Experiments in Dynamic Occupancy Grid Mapping
Findings
Conclusions and Future Works