Abstract

Environment perception is critical for feasible path planning and safe driving in autonomous vehicles. Perception devices such as cameras, LiDAR (Light Detection and Ranging), and IMUs (Inertial Measurement Units) provide only raw sensing data with no identification of vital objects, which is insufficient for autonomous vehicles to perform safe and efficient self-driving operations. This study proposes an improved edge-oriented segmentation-based method to detect objects in the sensed three-dimensional (3D) point cloud. The method consists of three main steps. First, the bounding areas of objects are identified by edge detection and stixel estimation in the corresponding two-dimensional (2D) images taken by a stereo camera. Second, sparse 3D point clouds of the objects are reconstructed within these bounding areas. Finally, the dense point clouds of the objects are segmented by matching the sparse 3D object point clouds against the whole-scene point cloud. Compared with existing segmentation methods, the experimental results demonstrate that the proposed edge-oriented segmentation method improves the precision of 3D point cloud segmentation and that objects can be segmented accurately. Meanwhile, the visualization of output data in advanced driving assistance systems (ADAS) is greatly facilitated by the reduced computational time and the smaller number of points in each object's point cloud.
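As a rough illustration of the final matching step described above (a sketch, not the authors' implementation), the following Python snippet segments a dense scene cloud by keeping the points that fall within a fixed radius of any sparse object point. The function name, the brute-force distance computation, and the radius threshold are illustrative assumptions; a real system would use a spatial index such as a k-d tree for efficiency.

```python
import numpy as np

def segment_by_sparse_match(scene, sparse, radius=0.5):
    """Return the subset of the dense scene cloud whose points lie
    within `radius` of any sparse object point.

    scene  : (N, 3) array, dense whole-scene point cloud
    sparse : (M, 3) array, sparse object point cloud
    """
    # Pairwise Euclidean distances, shape (N, M): brute force for clarity.
    d = np.linalg.norm(scene[:, None, :] - sparse[None, :, :], axis=2)
    # Keep scene points whose nearest sparse point is within the radius.
    mask = d.min(axis=1) <= radius
    return scene[mask]
```

For example, a scene containing a small object cluster near the origin plus distant background points would yield only the near-origin points when matched against a single sparse point at the cluster's center.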

Highlights

  • To drive safely and efficiently in increasingly complex traffic, drivers must have a clear and accurate understanding of the environment

  • Many vehicle manufacturers focus on developing advanced driving assistance systems (ADAS) to assist drivers with decision making [3]

  • Compared with the LiDAR point cloud, the partitioned point cloud offers satisfactory distance accuracy, which is important for autonomous vehicles, and it also contains color and semantic information, which is helpful for object tracking


Introduction

To drive safely and efficiently in increasingly complex traffic, drivers must have a clear and accurate understanding of the environment. Complex road architecture (e.g., flyovers, switchbacks, abrupt slopes), moving pedestrians and vehicles in urban areas, dim illumination in underground parking lots, weak GPS signals, and inaccurate positioning all increase the difficulty of driving [1,2]. These challenges apply both to human drivers and to autonomous vehicles, whose safety relies heavily on knowledge of the surrounding environment. To address these issues, many vehicle manufacturers are focusing on developing advanced driving assistance systems (ADAS) to assist drivers with decision making [3].
