Abstract

This paper presents a knowledge model of everyday objects for semantic grasping. Given 3D point cloud data and an intended purpose, the model extracts the grasp areas of an everyday object and the approach directions for grasping it. Because the parts that make up everyday objects have functions related to their manipulation, we represent everyday objects as connected parts forming functional units. The knowledge model describes both the structure of an everyday object and information on its manipulation. The structure describes the object's component parts as simple shape primitives, providing geometric information, and describes the connections between parts with kinematic attributes. This structural information is used to map the manipulation knowledge onto the 3D point cloud data. The manipulation knowledge of an object includes the grasp areas and approach directions for the intended purpose. Fine grasps suited to the intended task can then be generated by grasp planning that accounts for grasp stability and the robot's kinematics within these grasp areas and approach directions.
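To make the abstract's structure concrete, the sketch below shows one possible encoding of such a knowledge model: parts as shape primitives, connections with kinematic attributes, and task-specific grasp knowledge. All class and field names here are illustrative assumptions, not the paper's actual representation.

```python
from dataclasses import dataclass


@dataclass
class Part:
    """A component part approximated by a simple shape primitive."""
    name: str
    primitive: str        # e.g. "cylinder", "box" (assumed primitive vocabulary)
    dimensions: tuple     # primitive parameters, e.g. (radius, height)


@dataclass
class Connection:
    """A connection between two parts, annotated with a kinematic attribute."""
    parent: str
    child: str
    kinematics: str       # e.g. "fixed", "revolute" (assumed attribute set)


@dataclass
class GraspKnowledge:
    """Manipulation knowledge: where and how to grasp for a given purpose."""
    task: str                  # intended purpose, e.g. "drink"
    grasp_part: str            # part whose region serves as the grasp area
    approach_direction: tuple  # approach direction toward the grasp area


@dataclass
class ObjectModel:
    """Knowledge model: object structure plus manipulation knowledge."""
    name: str
    parts: list
    connections: list
    grasps: list

    def grasps_for(self, task):
        """Return the grasp areas and approach directions for a task."""
        return [g for g in self.grasps if g.task == task]


# Hypothetical model of a mug: a cylindrical body with a fixed handle.
mug = ObjectModel(
    name="mug",
    parts=[
        Part("body", "cylinder", (0.04, 0.10)),
        Part("handle", "box", (0.01, 0.03, 0.08)),
    ],
    connections=[Connection("body", "handle", "fixed")],
    grasps=[
        GraspKnowledge("drink", "handle", (1.0, 0.0, 0.0)),
        GraspKnowledge("hand_over", "body", (0.0, 0.0, -1.0)),
    ],
)
```

A query like `mug.grasps_for("drink")` would then yield the handle as the grasp area, after which the grasp areas would be mapped onto the observed point cloud and refined by grasp planning, as the abstract describes.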
