Abstract

Robust vision-based grasping and manipulation of unknown objects in unstructured scenes requires the extraction of action candidates based on visual information while taking into account noise and occlusions in such scenes. We address this problem by combining the concept of affordances and Bayesian Recursive State Estimation. We propose to extract affordances using heuristics on the averaged local surface information of supervoxels in a point cloud. Based on a local, geometry-aware coordinate frame, we define a uniform state for different affordances. Using Bayesian statistics, this state is fused across multiple observations of the scene to improve the estimates for the pose and existence certainty of actions. This facilitates the extraction of robust grasping and manipulation actions independent of the segmentation of a scene. The proposed approach is evaluated in grasping experiments with more than 900 grasp executions using the humanoid robot ARMAR-6 in an unstructured scene with a variable number of unknown objects. The experimental results show that the grasping success rate is improved by over 10% compared to a state-of-the-art approach.
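
To make the recursive fusion concrete, below is a minimal sketch of how a per-affordance state could be updated across repeated observations of a scene. The abstract does not specify the measurement model, so everything here is an illustrative assumption: the function names `update_existence` and `fuse_pose`, the binary detection likelihoods (0.8 and 0.2), and the Gaussian information-form fusion applied to the position component of the pose are hypothetical choices, not the paper's actual implementation.

```python
import numpy as np

def update_existence(prior, detected, p_det_exists=0.8, p_det_clutter=0.2):
    # One recursive Bayes update of a binary existence variable E, given a
    # detection event D in the current observation.
    # p_det_exists = P(D=1 | E=1), p_det_clutter = P(D=1 | E=0); both values
    # are hypothetical and would come from the actual sensor model.
    if detected:
        num = p_det_exists * prior
        den = num + p_det_clutter * (1.0 - prior)
    else:
        num = (1.0 - p_det_exists) * prior
        den = num + (1.0 - p_det_clutter) * (1.0 - prior)
    return num / den

def fuse_pose(mean_a, cov_a, mean_b, cov_b):
    # Product of two Gaussian estimates (information-form fusion), a common
    # way to combine repeated noisy observations of the same action frame.
    info_a = np.linalg.inv(cov_a)
    info_b = np.linalg.inv(cov_b)
    cov = np.linalg.inv(info_a + info_b)
    mean = cov @ (info_a @ mean_a + info_b @ mean_b)
    return mean, cov

# Example: an affordance detected in 3 of 4 observations of the scene.
p = 0.5  # uninformative prior on existence
for detected in [True, True, False, True]:
    p = update_existence(p, detected)
print(f"existence certainty after 4 observations: {p:.2f}")

# Example: fuse two noisy position estimates of the same grasp frame.
m, C = fuse_pose(np.array([0.50, 0.10, 0.80]), np.eye(3) * 0.01,
                 np.array([0.52, 0.09, 0.79]), np.eye(3) * 0.02)
print("fused position:", np.round(m, 3))
```

Handling existence certainty this way lets weakly supported action candidates decay as observations accumulate, which is one plausible reading of how the approach stays robust to noise and occlusions without relying on a scene segmentation.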
