Integrating hardware, software, and decisional components is fundamental in the design of advanced mobile robotic systems capable of performing challenging tasks in unstructured and unpredictable environments. We address these integration challenges through an iterative design strategy centered on a decisional architecture based on the motivated selection of behavior-producing modules. This architecture has evolved over the years, from the integration of obstacle avoidance, message reading, and touch-screen graphical interfaces, to localization and mapping, planning and scheduling, sound source localization, tracking and separation, and speech recognition and generation, all on a custom-made interactive robot. Designed to be a scientific robot reporter, the robot provides understandable and configurable interaction, intention, and information in a conference setting, reporting its experiences for on-line and off-line analysis. This paper presents the integration of these capabilities on the robot, revealing new issues in planning and scheduling, in coordinating audio, visual, and graphical capabilities, and in monitoring the uses and impacts of the robot's decisional capabilities under unconstrained operating conditions. It also outlines new design requirements for our next iteration: adding compliance to the platform's locomotion and manipulation capabilities, and supporting natural interaction through arm gestures, facial expressions, and the robot's pose.