Abstract

This paper describes an object detection and classification method for an Unmanned Ground Vehicle (UGV) that uses a range sensor and an image sensor: a 3D Light Detection And Ranging (LIDAR) sensor and a monocular camera, respectively. For safe driving, the UGV must detect pedestrians and cars along its route. Detection and classification based on a camera alone has an inherent problem: the algorithm must extract features from, and match them against, the full input image, which mixes object and background information and makes reliable classification difficult. Ideally, each image region passed to the classifier should contain a single object. In this paper, we introduce a newly developed 3D LIDAR sensor and apply a method that fuses the 3D LIDAR data with the camera data. The sensor, named KIDAR-B25, was developed by the LG Innotek Consortium in Korea. The 3D LIDAR detects objects, determines each object's Region of Interest (ROI) from the 3D information, and projects the ROI into the camera image for classification. In the 3D LIDAR domain, we detect breakpoints using a Kalman filter and then cluster the points with a line-segment method to determine an object's ROI. In the image domain, we extract feature data from the ROI using Haar-like features. Finally, the object is classified as a pedestrian or a car by an AdaBoost classifier trained on a database. To verify the system, we evaluate its performance mounted on a ground vehicle through field tests in an urban area.
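The LIDAR-domain step (breakpoint detection with a Kalman filter, followed by clustering into segments) can be sketched as below. This is a minimal illustration of the general technique on a single synthetic 2D range scan, not the paper's implementation: the constant-range process model, the noise parameters `q` and `r`, and the innovation gate threshold are all assumptions, and the KIDAR-B25 specifics are omitted.

```python
def detect_breakpoints(ranges, q=0.01, r=0.05, gate=1.0):
    """Flag scan indices where the range jumps beyond a Kalman innovation gate.

    A scalar Kalman filter tracks the range of the current surface under a
    constant-range model; a large innovation signals an object boundary.
    (Parameters q, r, gate are illustrative assumptions.)
    """
    x = ranges[0]          # state estimate: range of the current surface
    p = 1.0                # estimate variance
    breakpoints = []
    for i in range(1, len(ranges)):
        z = ranges[i]
        p_pred = p + q                 # predict: same range, grown uncertainty
        innovation = z - x
        if abs(innovation) > gate:     # range jump -> breakpoint (new object)
            breakpoints.append(i)
            x, p = z, 1.0              # re-initialise on the new surface
            continue
        k = p_pred / (p_pred + r)      # Kalman gain
        x = x + k * innovation         # update estimate with the measurement
        p = (1.0 - k) * p_pred
    return breakpoints

def cluster(ranges, breakpoints):
    """Split the scan into contiguous point clusters at the breakpoints."""
    bounds = [0] + breakpoints + [len(ranges)]
    return [ranges[a:b] for a, b in zip(bounds, bounds[1:])]

# Synthetic scan: background at ~5 m with a closer object at ~2 m in between.
scan = [5.00, 5.02, 4.98, 2.00, 2.01, 1.99, 5.00, 5.01, 4.99]
bps = detect_breakpoints(scan)
clusters = cluster(scan, bps)
```

Each resulting cluster would then be fitted with line segments to form an object ROI, which the full system projects into the camera image for Haar/AdaBoost classification.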

