Abstract

In this paper, we propose a simple method to obtain an object's 3D coordinate information in a single image, using one monocular camera and two 2D LiDAR sensors, all of which are widely used low-cost sensors. An extrinsic calibration method transforms each LiDAR sensor's coordinates into camera pixel coordinates in the image. A proportion factor is calculated from the relationship between the pixels of the calibration points and the distances of the points detected by the two LiDAR sensors. To build a correct 3D map, a deep learning algorithm for real-time object contour detection with a fully convolutional neural network is proposed, covering data gathering, data labeling, model training, and testing of the object contour detector. For 3D map building, the height of the object is obtained from the proportion factor and the pixels of the detected contour features. Combined with the LiDAR sensors' detection information, the 3D posture of the object is obtained. Based on the proposed extrinsic sensor calibration method and the object contour detection result, virtual points of the object are calculated. Finally, the experimental results show the 3D coordinate information of detected single and multiple objects. With the sensors and a controller assembled on a Kobuki robot, an indoor 3D map is built using the proposed methods.
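The LiDAR-to-pixel transformation described above can be illustrated with a standard pinhole projection under an extrinsic rigid-body transform. This is a minimal sketch, not the paper's calibration procedure: the intrinsic matrix `K`, rotation `R`, and translation `t` below are placeholder values, whereas the paper estimates the extrinsic parameters (and the proportion factor) from matched calibration points and LiDAR detections.

```python
import numpy as np

# Hypothetical calibration values for illustration only.
K = np.array([[600.0,   0.0, 320.0],   # camera intrinsics: fx, fy, principal point
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # rotation, LiDAR frame -> camera frame
t = np.array([0.0, -0.1, 0.0])         # translation in metres

def lidar_to_pixel(p_lidar):
    """Project a 3D point from the LiDAR frame into camera pixel coordinates."""
    p_cam = R @ p_lidar + t            # rigid-body transform into the camera frame
    u, v, w = K @ p_cam                # perspective projection (homogeneous)
    return u / w, v / w                # normalise to pixel coordinates

# A point 2 m in front of the camera maps near the image centre.
print(lidar_to_pixel(np.array([0.0, 0.0, 2.0])))  # -> (320.0, 210.0)
```

With both LiDAR sensors calibrated this way, their detections land in the same image plane, which is what allows the proportion factor to relate pixel offsets to metric distances.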
