Abstract

Humanoid robots operating in cluttered and unstructured environments, such as man-made and natural disaster scenarios, require sophisticated sensorimotor capabilities. A crucial prerequisite for the successful execution of whole-body locomotion and manipulation tasks in such environments is perceiving the environment and extracting the associated environmental affordances, i.e. the action possibilities the environment offers the robot. We believe that such a coupling between perception and action could be key to substantially increasing the flexibility of humanoid robots. In this paper, we present an approach for generating whole-body locomotion and manipulation actions based on the affordances associated with environmental elements in the scene, which are extracted via multimodal exploration. Based on the properties of detected environmental primitives and the estimated empty space in the scene, we propose methods to generate hypotheses for feasible whole-body actions while taking into account additional task constraints such as manipulability and balance. We combine visual and inertial sensing modalities by means of a novel depth model to generate segmented and categorized geometric primitives. A rule-based system then assigns affordance hypotheses to these primitives. Finally, precomputed whole-body manipulability and stability maps are used to filter out affordances that are beyond reach and to identify the most promising locations for action execution. We tested the developed methods in different scenes unknown to the robot, demonstrating that the generated affordance hypotheses are reasonable.
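
To make the described pipeline concrete, the sketch below illustrates how a rule-based assignment of affordance hypotheses to geometric primitives, followed by a coarse reachability filter, could look. This is a minimal illustration only: the primitive categories, rule thresholds, and the `reachable` check are hypothetical placeholders and are not the paper's actual taxonomy, rule set, or manipulability/stability-map lookup.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical primitive description; the paper's actual representation may differ.
@dataclass
class Primitive:
    kind: str                       # e.g. "horizontal_plane", "vertical_plane", "cylinder"
    area: float                     # surface area in m^2
    height: float                   # height above ground in m
    position: Tuple[float, float, float]  # centroid in the robot frame

# Simple rule table: primitive category + geometric conditions -> affordance label.
RULES = [
    ("support",  lambda p: p.kind == "horizontal_plane" and 0.3 < p.height < 1.0 and p.area > 0.02),
    ("lean",     lambda p: p.kind == "vertical_plane" and p.area > 0.1),
    ("grasp",    lambda p: p.kind == "cylinder"),
    ("stepping", lambda p: p.kind == "horizontal_plane" and p.height < 0.3 and p.area > 0.05),
]

def assign_affordances(primitives: List[Primitive]) -> List[Tuple[str, Primitive]]:
    """Attach every affordance label whose rule fires to each primitive."""
    hypotheses = []
    for p in primitives:
        for label, rule in RULES:
            if rule(p):
                hypotheses.append((label, p))
    return hypotheses

def reachable(p: Primitive, max_reach: float = 0.9) -> bool:
    """Placeholder for the manipulability/stability-map lookup:
    here only a coarse distance bound from the robot base is checked."""
    x, y, z = p.position
    return (x**2 + y**2 + z**2) ** 0.5 <= max_reach

if __name__ == "__main__":
    scene = [
        Primitive("horizontal_plane", area=0.25, height=0.75, position=(0.6, 0.0, 0.75)),
        Primitive("cylinder",         area=0.01, height=1.00, position=(0.5, 0.2, 1.00)),
        Primitive("vertical_plane",   area=0.80, height=1.20, position=(2.0, 0.0, 1.20)),
    ]
    for label, prim in assign_affordances(scene):
        status = "feasible" if reachable(prim) else "out of reach"
        print(f"{label:8s} on {prim.kind:17s} -> {status}")
```

In the paper's method, the last step would query precomputed whole-body manipulability and stability maps rather than the simple distance bound used here, so that only affordances that are both reachable and executable while maintaining balance are kept.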
