Abstract

To support local path planning and object avoidance, unmanned ground vehicles (UGVs) must perceive their environments. To gather high-precision information about a UGV’s surroundings, Light Detection and Ranging (LiDAR) is frequently used to collect large-scale point clouds. However, the complex spatial features of these clouds, such as being unstructured, diffuse, and disordered, make it difficult to segment and recognize individual objects. This paper therefore develops an object feature extraction and classification system that uses LiDAR point clouds to classify 3D objects in urban environments. After eliminating the ground points via a height-threshold method, the system describes the remaining 3D objects in terms of their geometric features, namely their volume, density, and eigenvalues. A back-propagation neural network (BPNN) model is trained over many iterations to use these extracted features to classify objects into five types. During training, the parameters in each layer of the BPNN model are continually adjusted via back-propagation using a non-linear sigmoid function. In the system, the object segmentation process supports obstacle detection for autonomous driving, and the object recognition method provides an environment perception function for terrain modeling. Our experimental results indicate that the object recognition accuracy reaches 91.5% in outdoor environments.
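The pipeline the abstract describes, geometric features (volume, density, covariance eigenvalues) fed to a sigmoid BPNN, can be sketched as below. This is an illustrative reconstruction, not the authors' implementation: the exact feature definitions, network size, and learning rate are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cluster_features(points):
    """Geometric features for one segmented object cluster.

    points: (N, 3) array of XYZ coordinates from a LiDAR point cloud.
    Returns bounding-box volume, point density, and descending covariance
    eigenvalues -- the feature types named in the abstract (the exact
    definitions here are assumptions).
    """
    extent = points.max(axis=0) - points.min(axis=0)
    volume = float(np.prod(np.maximum(extent, 1e-6)))  # axis-aligned box
    density = len(points) / volume                     # points per unit volume
    cov = np.cov(points.T)                             # 3x3 spatial covariance
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # largest first
    return np.array([volume, density, *eigvals])

class BPNN:
    """One-hidden-layer back-propagation network with sigmoid activations."""

    def __init__(self, n_in=5, n_hidden=8, n_out=5, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1 + self.b1)
        self.y = sigmoid(self.h @ self.W2 + self.b2)
        return self.y

    def train_step(self, x, target):
        """One gradient-descent step on the squared error for one sample."""
        y = self.forward(x)
        delta2 = (y - target) * y * (1 - y)              # output-layer error
        delta1 = (delta2 @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * np.outer(self.h, delta2)
        self.b2 -= self.lr * delta2
        self.W1 -= self.lr * np.outer(x, delta1)
        self.b1 -= self.lr * delta1
        return float(np.sum((y - target) ** 2))
```

In use, each segmented cluster would be reduced to a feature vector by `cluster_features`, and repeated calls to `train_step` with one-hot targets for the five object types would adjust the layer parameters, as the abstract describes.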

Highlights

  • Autonomous driving technologies enable motor vehicles to drive themselves safely and reliably, and are being widely researched for smart cities and urban services [1]

  • The ability to perceive their surroundings is essential for unmanned ground vehicles (UGVs) to achieve autonomous driving [2]

  • Autonomous UGVs need to obtain a large amount of accurate environmental data to support automatic object avoidance and local path planning [3]


Introduction

Autonomous driving technologies enable motor vehicles to drive themselves safely and reliably, and are being widely researched for smart cities and urban services [1]. The ability to perceive their surroundings is essential for unmanned ground vehicles (UGVs) to achieve autonomous driving [2]. Autonomous UGVs need to obtain a large amount of accurate environmental data to support automatic object avoidance and local path planning [3]. Several types of environment sensors, such as fisheye, binocular, and depth cameras, are widely used to obtain real-time information about a vehicle’s surroundings so it can be aware of its environment [4,5,6]. The clearest advantage of Light Detection and Ranging (LiDAR) is that it can rapidly collect high-precision, wide-range point clouds [7]. However, the non-uniform density of point clouds makes it difficult for a computer to allocate memory for their storage [10]. To analyze the types of obstacles found in outdoor scenes, highly efficient point-cloud preprocessing is therefore needed before the classification and recognition steps are executed [11].
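The preprocessing step the paper relies on, removing ground points with a height threshold before segmentation, can be sketched as follows. This is a minimal illustration; the ground-level estimator and the threshold value are assumptions, since the paper does not specify them here.

```python
import numpy as np

def remove_ground(points, ground_z=None, height_threshold=0.2):
    """Drop points within `height_threshold` of the ground level.

    points: (N, 3) array of XYZ coordinates. The ground level defaults to
    the lowest z value in the scan (an assumed estimator); points no more
    than `height_threshold` above it are discarded as ground.
    """
    z = points[:, 2]
    if ground_z is None:
        ground_z = z.min()
    return points[z > ground_z + height_threshold]
```

After this filter, only above-ground points remain, so the clusters produced by the later segmentation step correspond to discrete objects rather than to the road surface.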
