Abstract

Grasping is an essential prerequisite for autonomous robotic manipulation. The traditional approach to robotic grasping is to first estimate the pose of the object to be grasped and then to calculate a grasp configuration. However, two deficiencies make it challenging to apply this approach in real-world settings: (i) a model is required for each object, and (ii) object pose estimation from noisy and incomplete sensor data can be extremely difficult. More recent approaches localize grasp configurations directly, without prior object pose estimation. In this dissertation, we present grasp detection methods that follow this direction. Our methods localize enveloping and two-finger grasp affordances in noisy and unstructured point clouds obtained from one or more depth sensors. We introduce an efficient method for sampling grasp candidates with six degrees of freedom by constraining the robot hand to be aligned with the local minor curvature axis of the object surface to be grasped. We represent grasps as images that describe the geometry of the object surface in the vicinity of the grasp. Having defined these representations, we describe how to collect ground-truth data to train machine learning models that classify grasp candidates as actual grasps. In simulation and robot experiments, we demonstrate that our methods enable highly reliable robotic grasping of novel objects with a wide variety of shapes, sizes, and materials, in both isolated tabletop scenarios and dense clutter. We deploy our methods on two custom robotic systems for assisting people with motor disabilities in activities of daily living. These systems can autonomously pick up an object indicated by the user with a laser pointer.
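To make the candidate-sampling idea concrete, below is a minimal, hypothetical sketch of curvature-aligned grasp-candidate sampling on a raw point cloud, using only NumPy. The function names, neighborhood sizes, and the frame convention are illustrative assumptions for this sketch, not the dissertation's actual implementation; the key step it demonstrates is estimating the local minor curvature axis from the variation of neighboring surface normals and aligning a 6-DoF hand frame with it.

```python
# Hedged sketch: sample 6-DoF grasp candidates aligned with the local minor
# curvature axis of the surface. All names and parameters are illustrative.
import numpy as np


def estimate_normals(points, k=20):
    """Per-point surface normals via PCA over the k nearest neighbors."""
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        _, eigvec = np.linalg.eigh(np.cov(nbrs.T))
        normals[i] = eigvec[:, 0]  # direction of least variance ~ surface normal
    return normals


def local_curvature_frame(points, normals, idx, radius=0.03):
    """Local frame at points[idx]: (surface normal, major axis, minor curvature axis)."""
    dists = np.linalg.norm(points - points[idx], axis=1)
    nbr_normals = normals[dists < radius]
    # Eigen-analysis of the outer-product matrix of neighboring normals:
    # the direction in which the normals vary least is the minor curvature axis.
    M = nbr_normals.T @ nbr_normals
    _, eigvec = np.linalg.eigh(M)          # columns sorted by ascending eigenvalue
    minor_axis = eigvec[:, 0]              # least normal variation
    major_axis = eigvec[:, 1]              # most normal variation
    normal = eigvec[:, 2]                  # ~ mean surface normal
    if normal @ normals[idx] < 0:          # keep the normal consistently oriented
        normal = -normal
    return normal, major_axis, minor_axis


def sample_grasp_candidates(points, n_samples=100, seed=0):
    """6-DoF candidate poses whose hand axis is the local minor curvature axis."""
    rng = np.random.default_rng(seed)
    normals = estimate_normals(points)
    candidates = []
    for idx in rng.choice(len(points), size=n_samples, replace=False):
        normal, _, minor = local_curvature_frame(points, normals, idx)
        approach = -normal                      # approach the surface head-on
        closing = np.cross(minor, approach)     # gripper closing direction
        R = np.column_stack([approach, closing, minor])  # right-handed rotation
        candidates.append((points[idx], R))     # candidate pose: position + rotation
    return candidates
```

In the pipeline described in the abstract, each such candidate would then be rendered into an image of the nearby surface geometry and scored by a learned classifier; that stage is omitted here.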
