Abstract
Robot object grasping and handling requires accurate grasp pose estimation and gripper/end-effector design, tailored to individual objects. When object shape is unknown, cannot be estimated, or is highly complex, parallel grippers can provide insufficient grip. Compliant grippers can circumvent these issues through the use of soft or flexible materials that adapt to the shape of the object. This letter proposes a 3D printable soft gripper design for handling complex shapes. The compliant properties of the gripper enable contour conformation, yet offer tunable mechanical properties (i.e., directional stiffness). Objects with complex shapes, such as non-constant curvature or convex and/or concave features, can be grasped blind (i.e., without grasp pose estimation). The motivation behind the gripper design is the handling of industrial parts, such as jet and Diesel engine components. (Dis)assembly, cleaning and inspection of such engines are complex, manual tasks that can benefit from (semi-)automated robotic handling. The complex shape of each component, however, limits where and how it can be grasped. The proposed soft gripper design is tuned by compliant cell stacks that deform to the shape of the handled object. Individual compliant cells and cell stacks are characterized, and a detailed experimental analysis of more than 600 grasps with seven different industrial parts evaluates the approach.
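As an illustration of how stacking compliant cells can tune stiffness, the sketch below models each cell as a linear spring, combining cells in series within a stack and stacks in parallel across a gripper pad. This linear-spring approximation, and all names and values in it, are assumptions for illustration only; the letter characterizes the printed cells experimentally, and their behavior need not be linear.

```python
# Minimal sketch: tuning gripper-pad stiffness by stacking compliant cells.
# Assumption: each cell behaves as a linear spring (the printed cells in the
# letter are characterized experimentally and may be nonlinear and
# direction-dependent).

def series_stiffness(cell_stiffnesses):
    """Effective stiffness of cells stacked in series: 1/k = sum(1/k_i)."""
    return 1.0 / sum(1.0 / k for k in cell_stiffnesses)

def parallel_stiffness(stack_stiffnesses):
    """Effective stiffness of stacks acting in parallel: k = sum(k_i)."""
    return sum(stack_stiffnesses)

# Hypothetical example: a pad with 3 parallel stacks of 4 identical cells each.
k_cell = 2.5          # N/mm per cell (illustrative value, not from the letter)
cells_per_stack = 4
num_stacks = 3

k_stack = series_stiffness([k_cell] * cells_per_stack)  # softer than one cell
k_pad = parallel_stiffness([k_stack] * num_stacks)      # stiffer than one stack

print(f"stack stiffness: {k_stack:.3f} N/mm, pad stiffness: {k_pad:.3f} N/mm")
```

Under this simplified reading, taller stacks lower the contact stiffness (favoring contour conformation), while adding stacks in parallel, or orienting cells differently, raises stiffness in chosen directions, which is one way to picture the directional stiffness the abstract refers to.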
Highlights
Robotic object grasping and manipulation are commonplace in industrial manufacturing.
The soft gripper design is evaluated by characterizing individual cells and cell stacks, benchmarking the soft gripper against the original gripper pads and a solid version, and performing numerous (>600) grasps with seven different industrial parts and tools.
This work proposes a novel soft gripper design based on 3D printed compliant cells.
Summary
Robotic object grasping and manipulation are commonplace in industrial manufacturing. Ongoing research efforts towards (bin) picking disregard feeders and utilize sensing to detect objects and their grasp pose for handling [3], for example by deep convolutional neural networks (CNNs) trained on large datasets of grasp attempts in a simulator or on a physical robot [4], [5]. Such approaches, however, assume a gripper design suited to the objects being handled.
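The CNN-based grasp detection referenced in the summary can be pictured with a minimal sketch like the one below, which maps a depth image to a planar grasp pose. It is a generic PyTorch illustration under an assumed architecture and assumed names, not the models of [4] or [5], and training such a network would require the large datasets of grasp attempts mentioned above.

```python
# Illustrative sketch of a CNN-based grasp pose predictor of the kind the
# summary refers to: it maps a depth image to a planar grasp
# (x, y, in-plane rotation, gripper opening width). The architecture and all
# names are assumptions for illustration, not the cited works' models.
import torch
import torch.nn as nn

class GraspPoseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Four outputs: grasp centre (x, y), rotation, opening width.
        self.head = nn.Linear(64, 4)

    def forward(self, depth_image):
        x = self.features(depth_image)
        return self.head(x.flatten(1))

# Usage with a dummy 1-channel 224x224 depth image; a trained model would be
# fitted on labelled grasp attempts collected in simulation or on a robot.
model = GraspPoseCNN()
grasp = model(torch.randn(1, 1, 224, 224))  # -> tensor of shape (1, 4)
```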