Abstract

Grasping objects over a large-scale area has been investigated extensively for mobile robots, but it remains a challenging task for unmanned aerial manipulators (UAMs). To achieve accurate grasping over a large-scale area, we propose a novel detection and control framework for UAMs that consists of two stages: preliminary localization and precise localization. At the preliminary localization stage, the RGB-D sensor mounted on the end-effector of the manipulator scans the surroundings to obtain a point cloud around the UAM. By feeding the known target shape and the processed point cloud parameters into our proposed loss function, we select the highest-priority area as the best potential region proposal, which helps the UAM screen out the target for precise localization from the obtained point cloud. At the precise localization stage, after the UAM reaches the best potential region, the RGB-D sensor mounted on the drone uses deep object pose estimation (DOPE) to estimate the 6D pose of the target. By independently compensating for the disturbances of the UAV and the manipulator, the UAM can accurately grasp the target using the estimated 6D pose. To evaluate the performance of the UAM, we conducted experiments in four different scenes. Experimental results demonstrate that the UAM grasps the target with an average success rate of 83.4% in the large-scale scene. These results confirm the feasibility and robustness of the framework. The code is available at https://github.com/skywoodsz/CatchIt.
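The abstract does not give the form of the loss function, but the region-proposal idea it describes (score each candidate point-cloud cluster against the known target shape and pick the best) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `region_loss` weights, the bounding-box shape term, and the distance term are all assumptions introduced here for clarity.

```python
import numpy as np

def region_loss(cluster, target_dims, sensor_pos, w_shape=1.0, w_dist=0.1):
    """Hypothetical loss for one candidate region (lower is better).

    cluster:     (N, 3) array of points in one segmented region.
    target_dims: (3,) known target dimensions (from the known shape).
    sensor_pos:  (3,) sensor position, used to prefer nearby regions.
    """
    # Compare the cluster's axis-aligned bounding-box extents with the
    # known target dimensions (sorted, so orientation does not matter).
    extents = cluster.max(axis=0) - cluster.min(axis=0)
    shape_err = np.linalg.norm(np.sort(extents) - np.sort(target_dims))
    # Slightly penalize regions far from the sensor.
    dist = np.linalg.norm(cluster.mean(axis=0) - sensor_pos)
    return w_shape * shape_err + w_dist * dist

def best_region(clusters, target_dims, sensor_pos):
    """Return the cluster with the lowest loss (the best region proposal)."""
    return min(clusters, key=lambda c: region_loss(c, target_dims, sensor_pos))
```

In the paper's pipeline, the selected region would then be handed to the precise-localization stage, where DOPE estimates the target's 6D pose.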
