Abstract

This paper presents a novel approach to accessibility analysis for manipulative robotic tasks. The workspace is captured with a stereo camera and modeled heterogeneously: recognized plane features, recognized objects with complete solid models, and unrecognized 3D point clouds organized in a multi-resolution octree. When the service robot is requested to manipulate a recognized object, the local accessibility information for that object is retrieved from the object database. Accessibility analysis is then performed to verify the object's accessibility and determine global accessibility. The verification process relies on visibility queries, which are accelerated by graphics hardware. The experimental results demonstrate the feasibility of real-time, behavior-oriented 3D workspace modeling for robotic manipulative tasks, as well as the performance gain of hardware-accelerated accessibility analysis on a commodity graphics card.
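As a rough illustration of the kind of multi-resolution octree used to organize unrecognized 3D points, the sketch below shows a point-capacity octree that subdivides cells adaptively down to a maximum depth. This is an assumption-laden illustration, not the authors' implementation; the node capacity, depth limit, and class names are hypothetical.

```python
# Minimal sketch (hypothetical, not the paper's implementation) of organizing
# unrecognized 3D points in a multi-resolution octree: each cell holds up to
# `capacity` points and subdivides into 8 children until `max_depth`.

class OctreeNode:
    def __init__(self, center, half_size, depth=0, max_depth=4, capacity=8):
        self.center = center          # (x, y, z) center of this cubic cell
        self.half_size = half_size    # half the cell's edge length
        self.depth = depth
        self.max_depth = max_depth
        self.capacity = capacity
        self.points = []
        self.children = None          # list of 8 children once subdivided

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            # Subdivide when over capacity, unless at the finest resolution.
            if len(self.points) > self.capacity and self.depth < self.max_depth:
                self._subdivide()
        else:
            self._child_for(p).insert(p)

    def _subdivide(self):
        cx, cy, cz = self.center
        h = self.half_size / 2.0
        # Child order: dx outermost, dz innermost (matches _child_for indexing).
        self.children = [
            OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h),
                       h, self.depth + 1, self.max_depth, self.capacity)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
        ]
        pts, self.points = self.points, []
        for p in pts:
            self._child_for(p).insert(p)

    def _child_for(self, p):
        # Map the point's octant to a child index (bit per axis).
        i = 0
        if p[0] >= self.center[0]: i += 4
        if p[1] >= self.center[1]: i += 2
        if p[2] >= self.center[2]: i += 1
        return self.children[i]

    def count(self):
        # Total points stored in this subtree.
        if self.children is None:
            return len(self.points)
        return sum(c.count() for c in self.children)
```

In such a scheme, coarse cells summarize distant or sparse regions while dense regions near candidate grasp targets are refined, which keeps per-frame model updates cheap enough for real-time use.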
