Abstract

One of the many challenges in advanced robotics is the autonomous exploration, recognition and manipulation of objects in cluttered, unstructured workspaces. The problem is even more challenging when multiple heterogeneous robots with different tools or end-effectors are expected to perform complex collaborative missions, such as disassembly. Within this context, the aim of this work is to develop a framework enabling a robot to detect and localise objects in a workspace and share environment information with another robot, which subsequently performs a grasping operation. The approach merges point cloud data captured from multiple poses to enrich the representation of the workspace, and decomposes a part into generic primitive geometric features to allow efficient shape recognition in the semantic space. This, in turn, allows easier integration with an ontological knowledge base for object search using natural language input. To identify primitive geometrical characteristics and infer object types, this paper introduces a simple but efficient graph-based method, in which graph nodes represent elementary geometric shapes such as planes. The concept is demonstrated using two KUKA robots: one, equipped with an RGB-D camera providing views from multiple angles, acts as the eye of the system, while the other, equipped with a gripper, acts as the hand. Although the current paper uses basic components such as cubes and triangular blocks, the algorithm is interpretable and can be extended to more complex shapes. The approach is demonstrated on wooden blocks, which are used to simulate disassembly in unstructured environments.
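The abstract only outlines the graph-based method, so the sketch below is a minimal Python reading of the idea, not the paper's implementation: each segmented primitive becomes a graph node, edges record the angle between adjacent surfaces, and a partial view is classified by subgraph matching against shape templates. The attribute names ("shape", "angle"), the cube template, the angle tolerance, and the use of networkx are all assumptions made for illustration.

```python
# Hypothetical sketch of graph-based primitive-shape recognition; attribute
# names, templates and tolerances are illustrative assumptions, not taken
# from the paper.
import networkx as nx
from networkx.algorithms import isomorphism

def build_part_graph(surfaces, adjacencies):
    """surfaces: {node_id: primitive label}; adjacencies: [(a, b, angle_deg)]."""
    g = nx.Graph()
    for node_id, shape in surfaces.items():
        g.add_node(node_id, shape=shape)
    for a, b, angle in adjacencies:
        g.add_edge(a, b, angle=angle)
    return g

# Template for a cube: six planar faces, every pair of adjacent faces meeting
# at 90 degrees (faces 0-3 form the side ring, 4 is the top, 5 the bottom).
cube = build_part_graph(
    {i: "plane" for i in range(6)},
    [(0, 1, 90), (1, 2, 90), (2, 3, 90), (3, 0, 90)]
    + [(4, i, 90) for i in range(4)]
    + [(5, i, 90) for i in range(4)],
)

# Three mutually adjacent planes seen from one viewpoint, e.g. the visible
# corner of a box after plane segmentation of the merged point cloud.
observed = build_part_graph(
    {0: "plane", 1: "plane", 2: "plane"},
    [(0, 1, 90), (1, 2, 90), (0, 2, 90)],
)

# A partial view matches a template if it is isomorphic to one of the
# template's subgraphs; the angle tolerance absorbs sensor noise in the
# estimated surface normals.
matcher = isomorphism.GraphMatcher(
    cube,
    observed,
    node_match=isomorphism.categorical_node_match("shape", None),
    edge_match=isomorphism.numerical_edge_match("angle", 0, atol=5.0),
)
print("cube candidate:", matcher.subgraph_is_isomorphic())  # True
```

In a setup like the one the abstract describes, such templates would plausibly live in the ontological knowledge base, keyed by the object names used in natural-language queries, so that a match in the graph domain maps directly to a semantic label.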
