Abstract

Humanoid robots that have to operate in cluttered and unstructured environments, such as man-made and natural disaster scenarios, require sophisticated sensorimotor capabilities. A crucial prerequisite for the successful execution of whole-body locomotion and manipulation tasks in such environments is the perception of the environment and the extraction of associated environmental affordances, i.e., the action possibilities of the robot in the environment. We believe that such a coupling between perception and action could be key to substantially increasing the flexibility of humanoid robots. In this paper, we approach the affordance-based generation of whole-body actions for stable locomotion and manipulation. We incorporate a rule-based system to assign affordance hypotheses to visually perceived environmental primitives in the scene. These hypotheses are then filtered using extended reachability maps that carry stability information, in order to identify reachable affordance hypotheses. We then formulate the hypotheses as a constrained inverse kinematics problem in order to find whole-body configurations that utilize a chosen set of hypotheses. The proposed methods are implemented and tested in simulated environments based on RGB-D scans as well as on a real robotic platform.
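The filtering step described above can be sketched in simplified form: affordance hypotheses attached to scene primitives are kept only if their target positions fall into reachable, sufficiently stable cells of a discretized reachability map. This is a minimal illustrative sketch, not the authors' implementation; the class names, the dictionary-based map, the voxel size, and the stability threshold are all assumptions for illustration.

```python
# Illustrative sketch of affordance-hypothesis filtering with a
# stability-annotated reachability map. All names and data structures
# (AffordanceHypothesis, reachability_map, VOXEL) are hypothetical.
from dataclasses import dataclass

VOXEL = 0.1  # assumed discretization of the reachability map, in meters


@dataclass
class AffordanceHypothesis:
    kind: str              # e.g. "grasp", "support", "lean"
    position: tuple        # target end-effector position (x, y, z) in meters


def voxel_key(pos):
    """Map a continuous position to its reachability-map voxel index."""
    return tuple(int(round(c / VOXEL)) for c in pos)


# Toy reachability map: voxel index -> stability score in [0, 1].
reachability_map = {
    (0, 0, 10): 0.9,   # reachable and stable
    (1, 0, 10): 0.4,   # reachable but insufficiently stable
}


def filter_hypotheses(hypotheses, rmap, min_stability=0.5):
    """Keep hypotheses whose target lies in a reachable, stable voxel."""
    return [h for h in hypotheses
            if rmap.get(voxel_key(h.position), 0.0) >= min_stability]


hypotheses = [
    AffordanceHypothesis("grasp", (0.0, 0.0, 1.0)),  # maps to voxel (0, 0, 10)
    AffordanceHypothesis("lean", (0.1, 0.0, 1.0)),   # maps to voxel (1, 0, 10)
]
reachable = filter_hypotheses(hypotheses, reachability_map)
```

In this toy setup only the "grasp" hypothesis survives the filter; the surviving set would then be handed to the constrained inverse kinematics solver to search for a whole-body configuration realizing the chosen hypotheses.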
