Abstract

Enabling a mobile robot to autonomously identify and locate a target object in an unknown environment is a challenging problem. In this paper, a novel multi-sensor fusion method based on a camera and a laser range finder (LRF) is proposed for mobile manipulation. Although a camera acquires large quantities of information, it cannot directly measure the 3D structure of the environment; moreover, camera image processing is complex and easily influenced by changes in ambient light. Given the ability of the LRF to directly measure the 3D coordinates of the environment and its robustness to outside disturbances, and the superiority of the camera in acquiring rich color information, the two sensors are combined to exploit their respective advantages, yielding more accurate measurements and simpler information processing. To overlay the camera image with the measurement point cloud of the pitching LRF and to reconstruct a 3D image that includes per-pixel depth information, a homogeneous transformation model of the system is built. Then, by combining the color features from the camera image with the shape features from the LRF measurement data, autonomous identification and location of the target object are achieved. To extract the shape features of the object, a two-step method is introduced, and a sliced-point-cloud algorithm is proposed for the preliminary classification of the LRF measurement data. The effectiveness of the proposed method is validated by experiments carried out on a mobile manipulator platform. The experimental results show that with this method the robot can not only identify the target object autonomously, but also determine whether it can be manipulated, and acquire a proper grasping location.
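
A minimal sketch of the fusion step described above: projecting LRF measurement points into the camera image through a homogeneous transformation, so that each covered pixel gains a depth value. The intrinsic matrix K and the extrinsic rotation R and translation t below are hypothetical placeholders standing in for the system's calibrated parameters; they are not values from the paper.

```python
import numpy as np

# Assumed camera intrinsics (fx, fy, cx, cy) -- placeholders, not from the paper.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed LRF-to-camera extrinsics: rotation R and translation t (meters).
R = np.eye(3)
t = np.array([0.05, 0.0, 0.0])

def project_lrf_points(points_lrf):
    """Map an Nx3 array of LRF points to pixel coordinates with depth."""
    pts_cam = points_lrf @ R.T + t      # homogeneous transform into the camera frame
    z = pts_cam[:, 2]
    valid = z > 0                       # keep only points in front of the camera
    uvw = pts_cam[valid] @ K.T          # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]       # normalize by depth to get (u, v)
    return uv, z[valid]                 # pixel coordinates and per-point depth

# Example: a single LRF point one meter ahead of the sensor.
uv, depth = project_lrf_points(np.array([[0.0, 0.0, 1.0]]))
print(uv, depth)
```

Registering the two sensors this way produces the reconstructed 3D image referred to in the abstract, in which color features (from the image) and shape features (from the depth of each pixel) can be evaluated jointly.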
